Title: Artificial Intelligence in Academic Grading: A Mixed-Methods Study
Author: Jonäll, Kristina
Date issued: 2024-10-02
URI: https://hdl.handle.net/2077/83561
Language: English (eng)
Type: Text
Keywords: AI-based assessment; artificial intelligence; feedback; sociocultural theory; mixed methods

Abstract

Purpose: This study investigates how consistently an AI model assesses short-essay responses compared with an educator. It further explores whether the AI model can provide students with useful feedback on their submissions, and it examines how educators and students perceive the use of an AI model for these tasks in higher education.

Theory: The study applies a sociocultural theoretical framework to analyse how cultural and social interactions shape educational processes, particularly in the context of integrating AI into assessment. Emphasising the interplay between individual cognitive development and the sociocultural environment, the framework holds that learning is not only a personal cognitive process but is also deeply embedded in social contexts and mediated through cultural tools. This perspective is central to understanding how students and educators perceive and interact with AI as a new educational tool, given the historical, cultural, and individual factors that shape those perceptions.

Method: The study used a mixed-methods approach to compare AI-based and traditional human grading. The quantitative analysis drew on students' short-essay answers from three Business Administration courses, focusing on objective measures such as the consistency and accuracy of the AI assessments and the usefulness and specificity of the feedback the AI model provided; statistical methods were used to analyse the grading outcomes and compare the AI's performance with that of human educators. Qualitative insights were gathered through nine semi-structured interviews, with four students and five educators, and the interview data were analysed thematically to identify underlying themes in perceptions and experiences of AI grading and feedback. This dual-method design enabled a comprehensive evaluation of both empirical data and subjective insights.

Results: AI grading varied in consistency across the courses, aligning more closely with educator assessments on less complex questions but struggling with multifaceted ones. The AI provided students with specific and useful feedback that they appreciated. Both educators and students had mixed feelings about AI grading: educators were concerned about losing qualitative insights and control over the grading process, while students appreciated the AI's objectivity but were sceptical of its reliability and missed human interaction. Overall, the study suggests that while AI can support certain aspects of grading and feedback, it cannot yet replace human judgment, and it advocates a hybrid approach that combines AI and human assessment to enhance both efficiency and educational integrity. The study offers valuable insights into the perceptions and usefulness of AI models in assessment and feedback in higher education.
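
Note: The Method section states that statistical methods were used to compare the AI's grading with that of human educators, but the abstract does not name a specific statistic. The following is a minimal sketch in Python, assuming Cohen's weighted kappa as one plausible agreement measure for ordinal grades; the grade values and grading scale below are hypothetical illustrations, not the study's data.

    # Minimal sketch: agreement between AI and educator grades.
    # Assumes Cohen's weighted kappa; the abstract does not specify
    # which statistic the study used. All data here are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical grades for the same short-essay answers.
    educator_grades = ["F", "E", "C", "B", "A", "C", "D", "B"]
    ai_grades       = ["F", "D", "C", "B", "B", "C", "D", "A"]

    # Quadratic weighting penalises large disagreements (A vs F)
    # more than adjacent ones (B vs A), which suits ordinal grades.
    grade_order = ["F", "E", "D", "C", "B", "A"]
    kappa = cohen_kappa_score(educator_grades, ai_grades,
                              labels=grade_order, weights="quadratic")
    print(f"Weighted Cohen's kappa (AI vs educator): {kappa:.2f}")

A kappa near 1 would indicate close alignment with educator assessments, while lower values would reflect the weaker consistency the study reports for multifaceted questions.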