Generative AI for Non-Techies: Empirical Insights into LLMs in Programming Education for Novice Non-STEM Learners
Scantamburlo, Teresa; Melonio, Alessandra
2025-01-01
Abstract
General-purpose large language models (LLMs) are transforming programming education. These models can autonomously generate code, identify errors, suggest debugging strategies, and provide explanations. Recent advances, including Chain-of-Thought (CoT) training, have strengthened these capabilities. This study examines how non-STEM postgraduate students use LLMs in an introductory programming course. After attending an AI literacy (AIL) workshop covering generative AI (GAI) basics, ethics, and prompt engineering, participants completed questionnaires and Python exercises. Findings show strong reliance on LLMs, often with uncritical acceptance of their outputs. The results underscore the need to embed AI literacy into non-STEM curricula and promote a model of augmented intelligence, encouraging critical, ethical collaboration with AI systems.



