Practical Solutions: Abstracts

1. The Turing Way – a book, a community, a global collaboration
Arielle Bennett1,3, Malvika Sharan1,3, Emma Karoune1,3, Esther Plomp2,3, Kirstie Whitaker1,3
Organisation(s): 1: The Alan Turing Institute; 2: TU Delft; 3: The Turing Way Project

The Turing Way (TTW) is an open research handbook on data science and more. Hosted at The Alan Turing Institute and co-created by a global community, TTW fosters open, reproducible and collaborative practices. More than 300 chapters have been written across five guides on Reproducible Research, Project Design, Communication, Collaboration and Ethical Research. These provide the information that research stakeholders need throughout a project’s lifecycle to ensure that their research is ethical, inclusive and reproducible. Resources are hosted openly on GitHub and the project is supported through dedicated working groups on governance, peer review, training, localisation and infrastructure. A Community Handbook enables the reuse of its community resources and infrastructure to build similar projects for domain-specific audiences.

The Turing Way team utilises collaborative approaches that empower diverse individuals to represent different realities in research and amplify voices from marginalised communities. Co-led by Malvika Sharan and Kirstie Whitaker, the core team draws experts and community champions from within the Turing Institute and from international organisations, representing manifold research expertise and professional backgrounds across the globe. They have provided a working example of team science, defining best practices and pathways for conducting responsible research across sectors and fields.

This submission highlights TTW as a practical solution to open science. As a book, a community and a global collaboration, TTW provides a blueprint for collaborative and inclusive data science, by both (1) providing guides covering a range of practices, and (2) exemplifying how open, community-led projects are conducted through the process of developing its resources.

2. Design and Implementation of an Open Peer Review Process for Posters: Addressing Challenges and Sharing Insights
Angelos Konstantinidis
Organisation(s): University of Groningen

Open peer review has the potential to increase the quality and transparency of educational events, but it also poses several challenges. This presentation discusses the design and implementation of an open peer review process for posters, highlighting methods for addressing practical, cultural, and communicational challenges.

The context of the study is the annual Education Festival (EF) organized by the Teaching Academy Groningen and the University of Groningen. The EF 2023 featured keynote speakers, onsite workshops, online presentations, and a poster exhibition on the theme ‘Collaboration in Teaching and Learning’. One of the aims of the EF 2023 was to bring teachers together to discuss educational practices, share ideas and approaches, and create connections between them. This was the first year that a poster exhibition was organized, inviting teachers to submit posters based on their teaching practices or educational research.

In this presentation, I begin by identifying the main challenges in implementing an open peer review process. Next, I discuss the strategy to address these challenges and present the methods and measures I developed and implemented to achieve a successful peer review process. Good communication with all stakeholders, careful organization of the procedure, and a clear framework were the three main pillars of the strategy. Practical measures included publishing a guide containing all relevant information about the peer review and training reviewers to give constructive feedback. Finally, I analyse how authors and reviewers evaluated their experience and offer insights based on my experience.

3. Persistent identifiers for social science survey variables: an infrastructure developed to foster open science
Janete Saldanha Bach, Claus-Peter Klas, Peter Mutschke
Organisation(s): GESIS – Leibniz Institute for the Social Sciences

The presentation highlights data citation challenges in the Social Sciences and the solution developed by KonsortSWD to enhance the findability and accessibility of dataset elements such as survey variables. Researchers often use only a subset of the variables contained in a dataset, making the common practice of assigning a persistent identifier (PID) to an entire dataset insufficient. Even when reused data are cited, current citation practices lack standards, provide inadequate metadata and documentation, or refer to inaccessible datasets.

We introduce a registration service assigning PIDs to dataset elements, enabling reliable citation and reuse, developed in the framework of the NFDI consortium KonsortSWD. PIDs on a more granular level are central to the FAIR principles, advancing FAIR data management, credibility and reusability. The talk underscores the benefits of assigning PIDs to dataset elements. Being machine-actionable, PIDs aid in adhering to FAIR principles by increasing research traceability, enabling citation tracking, and promoting digital connections among researchers and research outputs.

We present four use cases demonstrating how partners make data citation easier. The presentation concludes with recommendations for future functionalities, such as automated access to variables and visualizing their relationships as an open research knowledge graph.

Our service expands the DOI registration agency for social and economic data, da|ra, utilizing the ePIC API, supported by a set of compliant metadata schemas.

The service was evaluated using the RDA FAIR Data Maturity Model framework, showing high compliance levels. Participants will learn the benefits of assigning PIDs below study level, enhancing citation transparency and the FAIRness of data.

4. ProTIS: A tool that facilitates monitoring data management and Open Science indicators in research projects
Vincent Brunst, Maisam M. Dadkan, Aristotelis Kandylas, Garrett Speed
Organisation(s): Faculty of Geosciences, Universiteit Utrecht

FAIR principles have been around for a while and are mostly applied to research outputs. To provide high-quality support to the research community in data management, privacy compliance and Open Science, grant-agreement deliverables and project outputs such as Data Management Plans (DMPs), privacy compliance documents, and data publication information must be FAIR and traceable during and after the funded project. Currently, each research support unit serving an organizational entity has its own methods for tracking project deliverables.

We at the Faculty of Geosciences of Utrecht University are developing a Project Tracking Indication System (ProTIS) that interfaces with existing data sources and enriches them with additional tracking information and capabilities. The system will help research support services monitor the progress of running research projects and assign tasks to stakeholders to meet project requirements and deadlines. Research outputs such as papers and data publications, as well as (social) media dissemination and engagement, are also tracked in the system. These indicators will give insights into how and where the faculty can improve its commitment to Open Science. The ProTIS system and the knowledge and expertise produced will be useful for the inter-departmental services of universities and other research institutes. While the initial idea and design of ProTIS have been driven by the Geo Data Team at the Faculty of Geosciences, we welcome input from other research data management staff, Open Science professionals, and programmers to develop this concept. Connecting these different levels and addressing the challenges that each level faces will be an important task for the SSH scientific landscape in the next few years. We want to engage with the audience to exchange ideas, best practices and lessons learned about the organisation of the research data landscape in other countries.

5. Advancing Open Science Skills: Insights from Curricular Courses at the University of Zurich
Melanie Röthlisberger
Organisation(s): University of Zurich

This presentation outlines and reflects on five curricular courses on Open Data and Open Access offered to Bachelor’s, Master’s and PhD students at the University of Zurich; the courses are currently being transitioned into Open Educational Resources (OER). Accommodating a diverse student body with varying scientific backgrounds, educational levels and research experiences poses certain difficulties during course development and delivery. At the same time, this diversity fosters enriching discussions, a vibrant learning atmosphere, and fruitful exchanges among students. In addition to providing an overview of the curricular offerings, the presentation takes a closer look at the evaluative questions used to assess the efficacy of these courses and their usability for students’ educational advancement. Finally, the presentation sheds light on the challenges faced while promoting the courses within an institution that strives to enhance Open Science skills but contends with entrenched stereotypes and structural hierarchies.

6. The Research Software Directory: Show your research software to the world!
Maaike de Jong, Ewan Cahen, Jason Maassen
Organisation(s): Netherlands eScience Center

Research software plays a vital role in today’s research, but often this role is not recognized. The recently launched Research Software Directory (https://research-software-directory.org) is an online platform to discover, access, and share research software. It is available as a free service for any individual or organization who works with research software, across all domains. Developed by the Netherlands eScience Center and the Helmholtz Association, the Research Software Directory is open source.

With the Research Software Directory:

  • Researchers can quickly find and judge the relevance of research software
  • Research software engineers are encouraged to make their research software more visible, findable and accessible, promoting recognition of their work
  • Research organizations can showcase the software produced by their organization and monitor its reuse and impact

By combining metadata from various online sources (such as GitHub, Gitlab, Zenodo, DataCite, etc.) the Research Software Directory presents software in its academic and social context, including associated research projects, publications, data, blogs, contributors and more. The platform encourages proper citation of research software to ensure researchers and RSEs get credit for their work.

In this session, we will present the Research Software Directory and how it contributes to open science. We will also demonstrate how users can find and add software on the platform. Interested users can sign up to the service on the spot.

7. GoTriple. The European hub for social sciences and humanities
Sona Arasteh1, Emilie Blotière2
Organisation(s): 1: Max Weber Stiftung/OPERAS; 2: CNRS/Huma-Num

The discovery service GoTriple aims to provide a single, multilingual and multidisciplinary access point for SSH research. Developed by the TRIPLE project since 2019, the platform is now production-ready (TRL 9: actual system proven in operational environment) and offers various features seeking to support and foster Open Science practices. GoTriple is part of the OPERAS service portfolio, which aims to deliver transnational access to scholarly communication resources.

Through the multilingual vocabulary developed for the platform, GoTriple grants access to scholarly publications in 11 languages. Several innovative features aim explicitly to enhance collaboration across disciplines and languages: the integrated Trust Building System (TBS) connects academics with Small and Medium Enterprises; Pundit, the open annotation tool, allows users to annotate documents and share annotations. Additionally, by registering on the platform users can not only claim articles they have written but also display their expertise, their willingness to collaborate and the projects they are involved in. To make these profiles discoverable, GoTriple also enables users to search for profiles and projects. An integrated chat system allows users to get in touch with each other directly and form discussion groups.

The demo of the platform will highlight and explain the above-mentioned aspects. The platform has been designed in close collaboration with the research community and with potential users who are non-academics. During the demo, we will work through example use cases and show how the platform allows researchers to enhance their work and benefit from new collaborative opportunities.

8. Achieving granularity and accountability for author contributions with MeRIT
Malgorzata Lagisz
Organisation(s): University of New South Wales Sydney

Method Reporting with Initials for Transparency (MeRIT) extends and complements CRediT (Contributor Roles Taxonomy) by using authors’ initials in the methods section to further clarify authorship roles for reproducibility and replicability. This is especially important in the context of the growing number of multi-author empirical publications, where CRediT may not provide sufficient resolution of methodological contributions. By more accurately tracking contributors’ roles and responsibilities in the actual research process and implementation, MeRIT recognises the collaborative nature of modern research. It also points explicitly to the contributors who are best placed to provide additional clarifications and details essential for study replication and reproducibility. Adoption of MeRIT will not only prevent the diffusion of responsibility but also provide a fairer way of crediting people for their individual contributions. As such, MeRIT supports four out of the five Hong Kong Principles: 1) assessing responsible research practices, 2) valuing complete reporting, 3) rewarding the practice of Open Science and 4) acknowledging a broad range of research activities. Consequently, MeRIT contributes to supporting equity, diversity, and inclusiveness in science. Crucially, MeRIT is very easy to implement and flexible in its format. An article describing MeRIT is now published. To help with implementation, we have set up a website for MeRIT with FAQs and examples, available at www.merit.help.

9. Use it or Lose it: Facilitating the Use of Interactive Data Apps (IDAs) in Psychological Research Data Sharing
Franziska Usée
Organisation(s): Philipps-Universität Marburg

Open Data and the use of Open Source Software are two key principles of Open Science. However, the mere online availability of data and source code does not guarantee their reuse by other researchers. At the same time, sharing large data sets in an understandable and transparent format that motivates researchers to explore them remains a fundamental challenge. Interactive data apps (IDAs) have the potential to make scientific data sets more accessible and attractive, both within and beyond the academic research community. Specifically, in times of information overload, soaring time constraints, and often underdeveloped programming skills, IDAs may increase researchers’ willingness and ability to engage with large data sets and reuse them efficiently. Here, I aim to demonstrate the use of IDAs for reducing barriers to data reuse in psychological research and provide the code of two exemplary applications that may readily be adapted to other contexts (https://osf.io/hcznj/). In doing so, I capitalize on two open-source Python frameworks, Dash and Gradio. Both frameworks enable users to present and share their research data in a highly interactive and easily understandable manner. Once implemented in Python, an IDA can be hosted either locally (using the Terminal on macOS/Linux or the Windows Command Prompt) or externally, for example on www.pythonanywhere.com or Hugging Face Spaces. Whereas local hosting is especially useful during development, external hosting allows for easily sharing the IDA with others, for example alongside research papers.
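The pattern described above can be sketched with a minimal Gradio app. Note that the dataset, condition names and summary function below are hypothetical placeholders for illustration only; they are not the applications shared at https://osf.io/hcznj/.

```python
# A minimal IDA sketch using the open-source Gradio framework.
# The toy "dataset" (reaction times in ms from a hypothetical psychology
# experiment) stands in for a real shared data set.
DATA = {
    "congruent": [512, 498, 530, 505],
    "incongruent": [601, 645, 588, 610],
}

def summarize(condition: str) -> str:
    """Return mean and range of reaction times for the chosen condition."""
    values = DATA[condition]
    mean = sum(values) / len(values)
    return f"{condition}: mean={mean:.1f} ms, min={min(values)}, max={max(values)}"

def main() -> None:
    """Build and serve the interactive app (requires `pip install gradio`)."""
    import gradio as gr  # third-party dependency, imported only when serving

    demo = gr.Interface(
        fn=summarize,                                   # callback run on user input
        inputs=gr.Dropdown(choices=list(DATA), label="Condition"),
        outputs="text",
        title="Reaction-time explorer",
    )
    demo.launch()  # serves the app on a local URL

# To host the app locally, call main(); deploying the same file to
# Hugging Face Spaces makes it shareable alongside a paper.
```

Keeping the analysis logic (`summarize`) separate from the interface code is a useful design choice: the same function can be unit-tested, reused in a notebook, or wrapped in a Dash layout instead.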

10. Open up your Research, a game on Open Science
Katherine Hermans
Organisation(s): University of Zurich

During this interactive networking session, we showcase an interactive game that we developed to educate people about Open Science. The game takes participants through a set of questions, which they can answer via a mobile device. Rather than play the whole game, we will discuss it on a meta level: is gamification a good way to introduce Open Science to a wider audience?

The game explores what open science entails, how open science practices can be applied, and how an open approach differs from more traditional research. In the game we follow Emma, a young researcher on her way to a doctorate, who must explore these questions. Should I write a data management plan? Pre-register my thesis? What is the advantage of making my data and code FAIR? Can’t I put this off until later? And where should I publish? At each stage of the research process, Emma must decide whether to practice an open science approach or go the traditional route.

The initial idea for this interactive game was developed because of the lack of suitable media that coherently presented open science practices in the research process from the perspective of the researchers.

11. Introducing Workflow-Integrated Data Documentation
Mio Hienstorfer-Heitmann, Leon Froehling, Arnim Bleier
Organisation(s): GESIS – Leibniz Institute for the Social Sciences

This paper proposes integrating an error-focused documentation approach for data collections into a typical data science workflow environment, Jupyter Notebooks, to assist computational social scientists in systematically documenting their data collection and analysis processes. While digital behavioural data collected from online platforms offer great opportunities for meaningful insights into human behaviour, these novel types of data are not designed but “found” and therefore require careful inspection and documentation before researchers can use them for academic research. The proposed tool, an extension for Jupyter Notebooks, adds a “documentation” cell to the existing “code” and “text” cells, into which a catalogue of data documentation questions can be pasted. Researchers can answer these questions “on the fly” while defining, collecting and analysing their data, enhancing reflection on the data collection process and improving data documentation. The tool guides researchers towards a more systematic reflection on concept definition and data collection, aiding the development of more robust and measurable concepts for online behavioural data and encouraging a more critical, and thus more reliable, use of digital data. The documentation sheets produced may be published together with the dataset, making the research more transparent and reflective. The proposed tool is of interest not only to computational social science researchers but also to a broader community of professionals who use digital data sources to make inferences about various topics and who require rigorous reflection on their data.

12. Beyond the BPC: the COPIM Project’s Tools and Infrastructures to Support Open Access Books
Tom Grady1, Joe Deville2, Rupert Gatti3
Organisation(s): 1: Birkbeck, University of London; 2: Lancaster University; 3: University of Cambridge

As national funder mandates increasingly include Open Access (OA) requirements for books in the UK and Europe, OA book publishing is becoming more widespread. Institutions are adapting to new ways of acquiring, cataloguing, and funding the publication of scholarly monographs.

But as well as offering new possibilities for reaching global audiences, OA books present new challenges: how can the sector sustain a diverse ecosystem of publishers of different sizes? How can we ensure the discoverability of OA publishers and books?

This session will be positioned in the context of policy developments from Plan S and the leaked EU Council statement on the high processing fees charged to authors, fees which are as much a problem for Humanities and Social Science monographs as for scientific journal publishing. We’ll highlight these challenges but also showcase some solutions being developed by the non-profit COPIM project, an international partnership of publishers, libraries and infrastructure providers. This is a rapidly developing OA world, and COPIM is working on several non-commercial and community-led projects at the heart of this landscape.

We’ll present our OA book funding model (Opening the Future) in the context of other OA library funding programmes. We’ll also present the Open Book Collective (OBC) which offers a collective platform where libraries can find, assess and sign up to a range of OA packages from diverse publishers. And we’ll demonstrate the Thoth metadata management platform which is working to integrate open access books into institutional library and repository systems.

13. Mapping Open Science resources from around the world by discipline and principles
Jo Havemann
Organisation(s): Access 2 Perspectives

Designed as a living document open to contributions from individuals and institutions alike, the interactive Map of Open Science resources contains more than 700 items. These include articles highlighting established or emerging Open Science practices per discipline and research field, as well as relevant tools, services, networks, best practices and guidelines across Open Access, Open Data, Open Peer Review, and Open Source Software and Hardware. Items are searchable by research discipline, world region, country, language, Open Science principle and more.

Primarily developed for practicing researchers from around the world, the map allows them to adapt their workflows and embrace the resources relevant and applicable to their research topic. Other scholarly stakeholders may also use the map as a reference to inform researchers and to gain a better overview of the current diversity of available resources. Clicking on a node opens a panel with additional information about the resource, its available languages, the owner or hosting institution, and more.

The map is described at DOI 10.21428/51e64700.893d7337 (dataset DOI: 10.5281/zenodo.7554848) and is subject to versioning. Additional resources can be added via the submission form: https://forms.gle/rAG7Pu56Z8Dt9fpn8.

14. Octopus.ac: A new approach to scientific publishing
Timothy Alan Fellows1, John Kaye1, Alexandra Freeman2
Organisation(s): 1: Jisc; 2: Octopus Publishing CIC

Octopus.ac is a new scientific publishing model, designed to encourage, enable, and reward best practice using 21st century tools. Free to read, free to publish, and entirely open source, this is a new way to register research that is fast, free and fair.

We believe that many of the problems in academic publishing stem from one principal issue: journals are being pulled in two different directions – the dissemination of findings to practitioners and general audiences, and serving as the primary research record of what has been done, when and by whom, in detail, for the benefit of specialists. This leads to key scientific content being relegated to supporting appendices, while researchers try to write their results in a highly narrative, attention-grabbing way which maximises ‘impact’.

And that’s where Octopus comes in. Moving away from the traditional journal paper, Octopus.ac uses smaller publication units which more closely align with the scientific process. The platform is designed to be the new primary research record for the scientific community, to create a new culture of collaboration and recognition, improving access to research outputs and resetting the academic incentive structure to reward best practice and recognise specialisation.

In this session, we will discuss some of the problems with the established publishing model, how Octopus seeks to solve them by espousing the principles of open science, and the challenges that Octopus faces in a landscape dominated by incentive structures based around impact.

15. The German Reproducibility Network – Let’s collaborate to implement Open Science practices in Germany
Maximilian Frank, Verena Heise
Organisation(s): German Reproducibility Network (GRN)

In this contribution we introduce the German Reproducibility Network (GRN) to the Open Science community. The main aim of the GRN is to increase the trustworthiness and transparency of scientific research in Germany and beyond, as part of a wider community of international reproducibility networks. Founded in 2020, the GRN is a cross-disciplinary network of currently 33 members, ranging from local reproducibility initiatives, often led by early career researchers, to academic institutions and academic societies.

We are convinced that the implementation of strategies for research improvement, including Open Science practices, can only be achieved through collaboration and information sharing between those who are actively engaged in changing the research landscape. We therefore present the different ways in which members of the Open Science community can get involved in the GRN and benefit from others’ experiences in changing the status quo. By developing this community further, the GRN aims to become an important political player in the German academic system that can act as a rallying point for campaigns for research improvement. Our current activities range from sharing good practices in training and education to incentivizing reproducible and Open Science practices at academic institutions, e.g., through changing curricula or developing appropriate strategies for professorial appointment procedures. We give an overview of recent projects, including our statements on the importance of Open Data and on the link between reliable research and good working conditions in academia, and present future projects with other international reproducibility networks.

16. Open up with a new Copyright License Policy for Collection Digitization
Elisa Herrmann, Frederik Berger, Falko Glöckler, Anke Hoffmann, Jana Hoffmann, Mareike Petersen, Christiane Quaisser, Franziska Schuster, Nadja Tata
Organisation(s): Museum für Naturkunde – Leibniz Institute for Evolution and Biodiversity Science

Over the past decade, the Museum für Naturkunde Berlin has developed into an integrated research museum with collection-based leading-edge research, a globally unique research-led collection and innovative science-based knowledge transfer.

The data from the Collection Discovery and Development project, embedded in the museum’s Future Plan, should therefore bring the greatest possible added value to the scientific community as well as to society as a whole. Until 2022, however, there was no uniform copyright status for the media and data produced and published in the course of digitization activities, and thus clear, easily understandable legal certainty for users, cooperation partners and data deliverers was missing. The new Copyright License Policy for Media and Data from the Mass Digitization Process aims to change this. It is embedded in the museum’s existing and planned open science policies and is intended as a guideline for further efforts towards more transparency and open research.

The path there was not always easy, given the tension between open research on the one hand and protection of the collection and biodiversity on the other.

The presentation outlines the genesis of the policy, the main points of discussion during its creation, its implementation, and what it has done to open up the collection.

17. Sharing practices of software artifacts and source code for reproducible research
Claire Jean-Quartier, Miguel Rey-Mazón, Alexander Gruber, Hermann Schranzhofer, Ilire Hasani-Mavriqi
Organisation(s): Graz University of Technology

While source code of software and algorithms represents an essential component in all fields of modern research involving data analysis and processing, it is rarely shared upon publication of results across disciplines. Simple guidelines for producing reproducible source code have been published; still, optimizing code to support its repurposing in different settings is often neglected, and registering it in catalogues for public reuse is considered even less. Although all research output should be reasonably curated for reproducibility, researchers frequently fail to comply with the availability statements in their publications: only a seventh of authors responded to requests for code, and less than 7% reacted positively to the inquiry, although these figures have improved over recent years. These practices also rarely include persistent unique identifiers that would allow referencing archived code artifacts at a specific version and time, providing long-lasting links from research articles. In this work, we provide a meta-analysis of current practices of authors in open scientific journals with regard to code availability indications and FAIR principles applied to code and algorithms, and present the repositories most commonly chosen by authors. We advocate proper description, archiving and referencing of source code and methods as part of scientific knowledge, supported by tutorials and institutional data stewardship providing guidance, and by the implementation of policies mandating the availability of research data and code, also appealing to editorial boards and reviewers for oversight.

18. DINA – An Open Source System for the Management of Natural Science Collections and Related Research Data
Falko Glöckler1, Christian Bölling1, James Macklin2, David Shorthouse2
Organisation(s): 1: Museum für Naturkunde Berlin; 2: Agriculture and Agri-Food Canada

DINA (“DIgital information system for NAtural history data”, https://dina-project.net) is a framework for like-minded practitioners of natural science collections to collaborate on the development of distributed, open source software that empowers and sustains collections management. Target collections include zoology, botany, mycology, geology, paleontology, and living collections. DINA is capable of managing both living and preserved specimens and serving the needs of users who conduct specimen- and sample-based research.

The DINA Consortium focuses on an open source software philosophy and on community-driven open development. Contributors share their development resources and expertise for the benefit of all participants. The DINA System is explicitly designed as a coupled set of web-enabled modules. At its core, this modular ecosystem includes strict guidelines for the structure of web application programming interfaces (APIs), which guarantee the interoperability of all components (https://github.com/DINAWeb). For users with specialized needs, these APIs are also accessible through client libraries. A dedicated public demo instance is available to allow the audience to “play” with and try out the system.

One of the motivations behind the DINA collection management system is to better model the complex relationships between collection objects, their derivatives and related research data, and to document their provenance. We will demonstrate DINA’s open development principles and illustrate its innovative approach to good practice in research data management. Furthermore, we will highlight the open collaboration and exchange within the DINA Consortium in order to inspire comparable endeavours in other scientific domains.

19. Open Research Europe: Innovations and Developments on the European Commission’s open research platform

Sam Hall
Organisation(s): F1000

In 2021, the European Commission launched Open Research Europe (ORE), a publishing
platform in collaboration with F1000. ORE is an open access publishing platform for
research resulting from Horizon 2020 and Horizon Europe funding, covering all subjects in
science, technology, engineering, and mathematics, as well as social sciences, arts, and
humanities. The platform utilises a post-publication peer-review model, offering an open and
cost-effective publishing solution that supports open science practices and transparency in
the publishing process.

ORE provides a broad range of metrics to measure the scientific and social impact of
articles and offers information on their use and re-use. Furthermore, it enables researchers
to comply with their funding requirements for immediate open access and open data,
without any additional cost. Many of ORE’s features and policies address key issues and
challenges currently being discussed within the scholarly publishing ecosystem, as well as
the open science movement in general. Such features include an array of innovative article
types which enable authors to publish research throughout their project’s development. The
open peer review model facilitates constructive discussion and greater transparency
between author, reviewer, and reader.

Since the platform’s launch, there have been numerous changes, which this session aims to
cover. The session will also report on the growth of the platform’s use, the number of
published article types by research area, as well as the geographical spread of authors.
Whether it’s your first time hearing about ORE or you think you’re already familiar with it,
come and learn about recent developments to the publishing platform and what the future
holds. This session aims to provide an overview of the publishing model offered by ORE and
promote discussion around its various elements and their effectiveness in solving current
issues with the publishing system.

20. An Implementation of Open Research Data Infrastructure through the Data Intensive Research Initiative of South Africa (DIRISA)
Nobubele Angel Shozi, Ntlharhi Baloyi
Organisation(s): Data Intensive Research Initiative of South Africa (DIRISA)

A culture still exists in South Africa in which research data is neither findable nor shared. Instead, many researchers are performing valuable research in silos: access to the data is not provided, and the data is not assigned findable identifiers to ensure that FAIR practices are implemented. A need therefore arose in South Africa for an institution that would ensure that research data management is implemented correctly across the country.

The purpose of this presentation is to highlight the open research data initiative carried out by the Data Intensive Research Initiative of South Africa (DIRISA). The role of DIRISA is to ensure that all South African researchers have proper research data infrastructure to facilitate adequate research data management and open data practices. DIRISA is a sustainable and cross-disciplinary research data infrastructure that provides several research data management services for planning, storing, accessing, sharing and preserving research data. DIRISA ensures that all South African researchers, from any research area, are able to find and access data in a trusted and secure environment that adheres to the current Protection of Personal Information (POPI) Act of South Africa.

This talk presents how DIRISA has implemented Open Research Data Infrastructure for the South African environment. The various services and applications that DIRISA provides will be highlighted. The talk will also provide guidance to other African countries towards the implementation of similar infrastructures, and discuss the challenges encountered in establishing this infrastructure.

21. Facilitating FAIR Data and Open Science in the Social Sciences and Humanities domain – An example of the Dutch science landscape
Nicole Emmenegger1, Nils Arlinghaus1, Ricarda Braukmann2, Loek Brinkman2
Organisation(s): 1: TDCC Social Sciences and Humanities; 2: DANS

In 2023, thematic Digital Competence Centres (TDCCs) were established in the Netherlands to bring together different stakeholders to support the FAIRness (Findable, Accessible, Interoperable and Reusable) of research data and software. The TDCCs cover three scientific domains: Natural and Engineering Sciences (NES), Life and Health Sciences (LHS) and Social Sciences and Humanities (SSH); we take the latter as an example.

The TDCC-SSH has identified a number of bottlenecks that are currently hindering SSH researchers in making outputs FAIR and openly available for reuse. Over the next five years, the TDCC-SSH will work to tackle these bottlenecks in collaboration with infrastructure providers, digital repositories, research performing organisations, and in particular, the local communities of data stewards and researchers committed to Open Science.

In this session, we will look at the different levels of the Dutch SSH landscape and how they contribute to a FAIR and Open Science ecosystem in the Netherlands. The TDCC-SSH plays a coordinating role, while infrastructures and certified repositories like DANS operate across organisations, and data stewards at universities provide support for researchers at their institutes. Last but not least, researchers committed to adopting the principles of Open Science come together in local Open Science communities.

Connecting these different levels and addressing the challenges that each level faces will be an important task for the SSH scientific landscape in the next few years. We want to engage with the audience to exchange ideas, best practice and lessons learned about the organisation of the research data landscape in other countries.