GBSB Global’s Academic Leadership in AI Assessment Integration
At GBSB Global, we continually monitor and adapt to transformative developments in the educational landscape. One of the key challenges facing academic institutions today is the integration and management of artificial intelligence, particularly in relation to traditional assessment methods. Traditional assignment frameworks are increasingly vulnerable to content generated effortlessly by AI tools such as ChatGPT, Perplexity, or DeepSeek.
Rather than restricting these tools, GBSB Global remains committed to fostering technological adaptability in its students. Recognizing that banning beneficial technologies is impractical, much as historical resistance to the printing press or the calculator proved futile, we face a fundamental question: how can we effectively assess student learning and academic integrity without restricting technological advancement? Addressing this challenge has led our institution to engage in dedicated research.
Based on insights gained at the Digital Universities 2025 conference (March 20-25), GBSB Global established a research team in April to develop and implement the GenAI-Enhanced Assessment Model (GAM). This interdisciplinary team, led by Dr. Wiktor Patena, Dean and Executive President of GBSB Global Business School, was formed to address the growing challenge of distinguishing between student-generated work and AI-generated content in assignments completed outside of controlled testing environments. The development of the GAM is in response to trends identified in recent educational technology research (Johnson et al., 2024; Martinez & Singh, 2025) that highlight the need to adapt assessment in the AI era.

Dr. Wiktor Patena, Dean and Executive President
The proposed assessment model includes several evidence-based components. First, it requires transparent documentation of AI use that aligns with emerging best practices in academic integrity (Thompson, 2024). Students must explicitly detail how AI tools contributed to their assignments and include a critical, reflective analysis that evaluates the reliability, accuracy, and relevance of AI-generated content. This approach draws on established critical digital literacy frameworks (Williams & Chen, 2024) while ensuring that originality and authentic student input remain central to the assessment.
"In addition," Dr. Patena points out, "the model implements a stratified review process in which selected students participate in oral defense sessions. These sessions, structured according to validated assessment rubrics, allow instructors to verify individual understanding beyond AI support. While this approach offers significant advantages for authentication, the team acknowledges implementation challenges related to faculty workload and the standardization of assessment criteria, which will need to be addressed in the pilot phase."
The GBSB Global research team is currently refining the model through a structured development process to be completed by September 2025. The initiative includes a planned evaluation framework to measure its effectiveness and impact on student learning outcomes. The project builds on the scholarly discourse of the Digital Universities 2025 conference while contributing to the emerging literature on AI-integrated assessment methodologies. Through this research-based approach, Drs. Patena, Naaman, and Xuereb aim to position GBSB Global as a contributor to pedagogical innovation as higher education adapts to artificial intelligence technologies.