The consortium behind the Erasmus+ project Digital Diversity: Crafting Inclusive AI Narratives (D2CIN) has successfully completed the key deliverable “Analysis of the Focus Group Interviews – Insights & Recommendations.” This milestone marks a significant step forward in shaping a new generation of inclusive, ethical, and pedagogically grounded AI tools for Vocational Education and Training (VET).
The analysis consolidates insights from two expert focus groups held in Germany and Bulgaria—each offering distinct yet complementary perspectives on the challenges and opportunities of embedding inclusive AI into educational practices. To deepen collaboration and ensure alignment, the partners convened in Germany to collectively review the findings and discuss the implications for the project’s upcoming outputs. During this visit, they also engaged with academic institutions such as XU University, presenting the project’s core objectives and raising awareness of inclusive AI practices within higher education.
A Cross-Disciplinary Evidence Base
The first focus group, held in Germany, brought together Inclusive Education Experts who explored pedagogical, ethical, and learner-centred considerations in AI-supported teaching. In parallel, the Bulgarian focus group engaged UX and Accessibility Designers, who contributed user-centric, technical, and design-oriented perspectives essential for developing accessible AI environments.
The combined insights from both groups create a rich, interdisciplinary evidence base that will guide the design of the Personalized Feedback Navigator and the development of the Handbook “Accessible AI for Education: A Comprehensive Guide to Inclusive Design and Training.”
Key Findings from the Analysis
1. Bias Mitigation and Ethical AI
Both expert groups emphasised that AI systems often reflect the assumptions, values, and blind spots of their creators. They identified the need for active bias detection, culturally sensitive representation, and transparent decision-making processes. The findings underscore the responsibility of both educators and designers to anticipate and mitigate bias throughout the development cycle.
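To illustrate what "active bias detection" can mean in practice, the following minimal TypeScript sketch checks whether the example scenarios surfaced by a learning tool over-represent any single group. The data shape, field names, and threshold are purely illustrative assumptions and are not drawn from the project's deliverables.

```typescript
// Illustrative sketch: a simple representation check over example scenarios.
// Scenario, personaGroup, and the 50% threshold are hypothetical choices.
interface Scenario {
  id: string;
  personaGroup: string; // the group a scenario's persona represents
}

function representationReport(scenarios: Scenario[]): Map<string, number> {
  // Count how often each group appears across all scenarios.
  const counts = new Map<string, number>();
  for (const s of scenarios) {
    counts.set(s.personaGroup, (counts.get(s.personaGroup) ?? 0) + 1);
  }
  return counts;
}

function flagImbalance(scenarios: Scenario[], maxShare = 0.5): string[] {
  // Flag any group accounting for more than `maxShare` of the scenarios.
  const counts = representationReport(scenarios);
  return [...counts.entries()]
    .filter(([, n]) => n / scenarios.length > maxShare)
    .map(([group]) => group);
}
```

A routine check of this kind does not replace culturally sensitive review, but it makes skewed representation visible early in the development cycle.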
2. Accessibility as a Foundational Requirement
Accessibility emerged as one of the most prominent recurring themes, highlighted in particular by the Bulgarian UX and accessibility specialists. Participants stressed that accessibility is not an add-on but an essential design principle that must inform every aspect of AI-driven learning tools. This includes WCAG alignment, multimodal formats, compatibility with assistive technologies, and cognitive-friendly interactions.
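One small example of compatibility with assistive technologies is announcing new AI responses through a WAI-ARIA live region, so screen-reader users are informed without losing their place. The sketch below assumes a web-based chat interface; the element ID and helper function are hypothetical, not part of the project's codebase.

```typescript
// Illustrative sketch: announce new AI responses to assistive technologies
// via a polite live region, in line with WCAG-oriented practice.
function renderResponse(text: string): void {
  let region = document.getElementById("ai-response-region");
  if (!region) {
    region = document.createElement("div");
    region.id = "ai-response-region";
    region.setAttribute("role", "status");      // status role implies polite announcements
    region.setAttribute("aria-live", "polite"); // announce without interrupting the learner
    document.body.appendChild(region);
  }
  region.textContent = text;
}

renderResponse("Here is some tailored feedback on your last exercise.");
```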
3. Inclusive UX and Digital Storytelling
Participants explored the role of avatars, narratives, and design elements in shaping learner perception and engagement. The analysis found that inclusive storytelling is central to building trust and authenticity, provided that diverse representation is handled sensitively and without stereotyping. Customisation, when used appropriately, was identified as a key tool for supporting learner comfort and cultural relevance.
4. Explainability, Transparency, and User Trust
Clear communication about how AI systems operate—and why they adapt or respond in particular ways—was identified as essential for building trust. Both groups expressed concerns about overly complex personalisation options or hidden adaptation mechanisms that could undermine user confidence. Transparent explainability was deemed critical for promoting ethical, responsible AI adoption.
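One way to avoid hidden adaptation mechanisms is to pair every adaptive response with a plain-language explanation that the learner can view on request. The following minimal sketch illustrates this idea; the interfaces, reason codes, and explanation texts are assumptions for illustration only.

```typescript
// Illustrative sketch: every adaptive response carries a human-readable "why".
interface AdaptiveResponse {
  content: string;      // what the learner sees
  adaptation: string;   // machine-readable reason, e.g. "reduced_difficulty"
  explanation: string;  // plain-language explanation, shown on request
}

function explainAdaptation(reason: string): string {
  const explanations: Record<string, string> = {
    reduced_difficulty:
      "Your last two answers suggested this topic is new to you, so the next task is simpler.",
    switched_modality:
      "You activated audio descriptions, so feedback is now also provided as spoken text.",
  };
  return explanations[reason] ?? "No adaptation was applied to this response.";
}

const response: AdaptiveResponse = {
  content: "Let's revisit the basics with a shorter example.",
  adaptation: "reduced_difficulty",
  explanation: explainAdaptation("reduced_difficulty"),
};
```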
5. Adaptive Content: Potential and Risks
While adaptive content can enhance learning by adjusting pace and difficulty, participants warned that it may also reduce transparency or learner autonomy if not carefully implemented. The analysis concluded that adaptation should always be visible, reversible, and accompanied by clear explanations.
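"Visible and reversible" can be as simple as recording every adaptation and letting the learner undo it. The sketch below is a minimal illustration under that assumption; the setting shape, class, and logging are hypothetical design choices, not the project's implementation.

```typescript
// Illustrative sketch: adaptations are logged (visible) and undoable (reversible).
type Setting = { pace: "slow" | "normal" | "fast"; difficulty: 1 | 2 | 3 };

class AdaptationHistory {
  private history: Setting[] = [];

  apply(current: Setting, next: Setting): Setting {
    this.history.push(current); // remember the previous state for undo
    console.log(
      `Adapted: pace ${current.pace} -> ${next.pace}, difficulty ${current.difficulty} -> ${next.difficulty}`
    );
    return next;
  }

  undo(current: Setting): Setting {
    // Learners can always step back to the previous configuration.
    return this.history.pop() ?? current;
  }
}

let settings: Setting = { pace: "normal", difficulty: 2 };
const log = new AdaptationHistory();
settings = log.apply(settings, { pace: "slow", difficulty: 1 }); // system adapts
settings = log.undo(settings);                                   // learner reverts
```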
The insights from this analysis will directly shape two major outcomes of the project:
The Personalized Feedback Navigator – a multilingual conversational AI avatar that will provide tailored guidance on bias mitigation, accessibility integration, ethical practice, and adaptive content transparency. The Navigator will act as a practical support tool for educators and designers seeking to improve the inclusiveness of their AI-driven training materials.
The Handbook “Accessible AI for Education” – a comprehensive guide that will translate research findings into actionable strategies, methodologies, and checklists for educators, VET institutions, and designers working with conversational AI in training contexts.
Together, these outputs will empower VET professionals to integrate inclusive, ethical, and human-centred AI into their teaching practices, supporting Europe’s broader digital transformation agenda.
Strengthening Collaboration Across Borders
The consortium meeting in Germany allowed partners to collectively evaluate the focus group results and establish a united direction for the next project phases. Presentations to academic institutions such as XU University demonstrated growing interest in the project’s contribution to AI ethics, digital inclusivity, and innovative educational approaches.
The successful completion of this analysis signals the project’s readiness to advance toward the next stages of development, ensuring that all upcoming tools and resources are deeply grounded in expert input, interdisciplinary perspectives, and real educational needs.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.
Project No: 2025-1-DE02-KA210-VET-000354956



