Ideas for Solution Session
and Discussion Session
7 April 2025
To facilitate the exchange of ideas and enable the formation of diverse, multidisciplinary sessions in response to the conference calls, submissions of solution and discussion sessions may be preceded by the voluntary public posting of ideas, followed by co-creation with interested participants. This allows you to connect proactively with other potential contributors, identify complementary viewpoints, jointly explore complex issues from multiple perspectives, and work collaboratively on a submission.
Contact
If you have any questions, please feel free to contact osc2025@conftool.pro.
Information on data privacy
This page is used to collect and comment on ideas for solution / discussion sessions within the Calls for Contribution for the Open Science Conference 2025. You’re welcome to provide your email address, but it is completely optional. If you do, it will be visible to others so they can get in touch with you. This page is moderated; your comments become visible only after manual approval. All data will be deleted after May 23, 2025, the deadline for submissions.

Photo: Ralf Rebmann
2 Ideas
Our Research Sprints initiative proposes a novel collaborative methodology for addressing pressing AI ethics challenges through time-bounded, intensive global research collaborations. These sprints bring together diverse, multidisciplinary teams across geographies and expertise levels to rapidly develop ethical frameworks, guidelines, and practical tools for responsible AI development and deployment.
The Research Sprints model demonstrates how Open Science principles can significantly enhance AI ethics research by enabling:
1. Cross-cultural and multidisciplinary perspectives that capture global ethical concerns beyond Western frameworks
2. Rapid iteration of ideas with transparent documentation of the entire development process
3. Accessible participation structures that welcome both established researchers and emerging voices
4. Openly licensed outputs (CC-BY) that can be immediately implemented, adapted, and built upon
We aim to share both our methodology and concrete outcomes from recent sprints focused on transparency in large language models, bias detection protocols, and participatory AI governance structures. Our session will guide participants through a mini-sprint experience, collaboratively developing a framework for one pressing AI ethics challenge identified by attendees.
This solution session will demonstrate how the Research Sprints model can be adopted by other research groups to accelerate ethical AI practices while maintaining scientific rigor and inclusivity. We welcome contributors with expertise in sprint methodologies, AI ethics frameworks, or collaborative research models to join our submission.
As AI systems grow more autonomous, the alignment problem—ensuring these systems remain robustly aligned with human intent under changing conditions—has emerged as a critical research frontier. While many current approaches focus on post-hoc interpretability or human feedback loops, this session explores a deeper and more universal criterion for alignment: recursive self-correction within a structured functional space.
We propose that the ability to adapt to misalignment under distributional shift is not just a feature of intelligent systems, but a minimal requirement. Yet today’s alignment protocols (e.g., ELK, RLHF, safety tuning) often fail to recursively test their own assumptions when perturbed—whether via shifts in ontology, adversarial meta-agents, or recursive self-reference. This leads to a phenomenon we term functional opacity, where protocols pass surface-level checks but fail under recursive stress.
This session invites participants to collaboratively develop a framework for identifying and implementing minimally sufficient recursive diagnostic structures—i.e., alignment tests that test themselves. We will use open examples (such as the ELK diagnostic mentioned above) to highlight how recursive reasoning can surface hidden misalignment, and we will connect this logic to principles of open science including transparency, epistemic accountability, and reproducibility.
Participants will:
– Explore functional models of intelligence that treat recursive coherence as a core epistemic constraint;
– Collaboratively stress-test existing open-source alignment protocols using recursive perturbation logic;
– Co-design open, reusable test templates that assess whether alignment protocols satisfy the recursive correction criterion;
– Identify how these tests and models can be made publicly accessible, citeable (via Zenodo), and open-licensed (CC-BY) to seed broader engagement.
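To make the idea of an "alignment test that tests itself" concrete, here is a minimal, purely illustrative Python sketch. All names in it (protocol, ontology_shift, recursive_check) are hypothetical assumptions for this example, not part of any existing alignment library: a toy check passes at the surface level but fails once its own ontology is perturbed, which is the functional opacity described above.

```python
# Hypothetical sketch of a recursive diagnostic: a check is re-applied
# after its own assumptions (here, an ontology of labels) are perturbed.
# All names are illustrative; this is not an existing tool or API.

def protocol(answer: str, ontology: dict) -> bool:
    """Toy alignment check: passes if the answer contains the label the
    current ontology assigns to the concept 'safe'."""
    return ontology["safe"] in answer

def ontology_shift(ontology: dict) -> dict:
    """Toy distributional shift: the label for 'safe' is renamed."""
    shifted = dict(ontology)
    shifted["safe"] = shifted["safe"] + "_v2"
    return shifted

def recursive_check(check, answer: str, ontology: dict) -> bool:
    """First-order test plus a self-test: a pass only counts if the
    verdict survives a perturbation of the check's own ontology.
    A check that passes level 0 but flips at level 1 exhibits
    surface-level compliance without recursive coherence."""
    level0 = check(answer, ontology)
    level1 = check(answer, ontology_shift(ontology))
    return level0 and level1

ontology = {"safe": "harmless"}
answer = "This plan is harmless."
print(protocol(answer, ontology))                  # surface-level pass: True
print(recursive_check(protocol, answer, ontology)) # fails under shift: False
```

The sketch is deliberately trivial; the session's design question is what a *sufficient* family of such perturbations looks like for real protocols such as ELK probes or RLHF reward checks.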
By the end of the session, we aim to produce:
– A jointly-authored draft of a reproducible test protocol for recursive coherence;
– A shared glossary and functional model visualization (CC-BY licensed);
– A continuation plan for embedding recursive diagnostics into open AI safety benchmarks.
This session directly supports open science by:
– Proposing a model-agnostic alignment evaluation framework that prioritizes reproducibility and generalizability;
– Encouraging epistemic transparency through recursive reasoning;
– Inviting interdisciplinary contributions from philosophy, systems engineering, and AI safety.
Session Structure (60 min):
15 min presentation:
– Introduce the recursive alignment failure mode using real examples (e.g., ELK under ontology shifts); explain the conceptual model (CME/HCFM).
35 min collaborative design workshop:
– Participants apply recursive diagnostics to existing open safety tools and explore how to model test sufficiency.
10 min documentation & outcome synthesis:
– Collective summary and next-step planning, supported by a shared note document (CC-BY 4.0).