International Society for Technology in Education (ISTE) 2026 Conference & Expo; Orlando, FL
To learn more, visit https://conference.iste.org/2026/.
Consortium for School Networking (CoSN) 2026 Annual Conference
To learn more, visit https://www.cosn.org/cosn2026/.
CSEDU 2026: International Conference on Computer Supported Education
To learn more, visit https://csedu.scitevents.org/.
Computer-Using Educators (CUE) Conferences
To learn more, visit https://cue.org/page/conferences.
SIGCSE 2026: ACM Technical Symposium on Computer Science Education
To learn more, visit https://sigcse2026.sigcse.org/.
Computational thinking (CT) is central to computer science, yet there is a gap in the literature on how CT emerges and develops in early childhood, especially for children from historically marginalized communities. Lack of access to computational materials and effective instruction can create inequities that have lasting effects on young children (Chaudry et al., 2017), and addressing those inequities means remedying the “pedagogical dominance of Whiteness” (Baines et al., 2018). How teachers provide asset-based, culturally responsive opportunities for CT in early childhood classrooms remains largely unknown. The purpose of this paper is to share a subset of findings from a qualitative, ethnographic study that explored the ways in which early childhood teachers (ECT) learned and implemented CT using asset-based pedagogies.
Computational thinking (CT) is central to computer science, yet there is a gap in the literature on the best ways to implement CT in early childhood classrooms. The purpose of this qualitative study was to explore how early childhood teachers enacted asset-based pedagogies while implementing CT in their classrooms. We followed a group of 28 early childhood educators who began with a summer institute and then participated in multiple professional learning activities over one year.
This chapter features intersections of art, literacy, and creative computing. As a component of STEAM, creative computing augments story creation, or storymaking (Buganza et al., 2023; Compton & Thompson, 2018), prompting learners to explore expressive meaning making as collective interactions with texts. To signify a way of teaching that supports such learning activities, we propose expressive STEM as a design principle, illustrated here with examples from an elementary school and a preservice art education program in Texas, USA.
Large language models (LLMs) have demonstrated strong potential for automatic scoring of constructed-response assessments. While human grading of constructed responses is usually based on given rubrics, the methods by which LLMs assign scores remain largely unclear. It is also uncertain how closely AI scoring mirrors the human process, or whether it adheres to the same grading criteria. To address this gap, this paper uncovers the grading rubrics that LLMs use to score students’ written responses to science tasks and examines their alignment with human scores. We further test whether enhancing this alignment can improve scoring accuracy.
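The abstract describes a general workflow: prompt an LLM to apply a scoring rubric to a constructed response, then measure how well the resulting scores align with human raters. The sketch below illustrates that workflow only in outline and is not the authors' method: the model name, rubric text, prompt wording, and sample data are placeholder assumptions, and quadratic weighted kappa is one common agreement measure for ordinal rubric scores.

```python
# Minimal sketch of rubric-based LLM scoring plus a human-alignment check.
# Assumptions: the openai and scikit-learn packages are installed and an
# OPENAI_API_KEY is set; the model, rubric, and data below are placeholders.
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()

# Placeholder rubric; the paper's actual rubrics are not given in the abstract.
RUBRIC = (
    "Score 0: no relevant science idea.\n"
    "Score 1: partially correct idea with gaps or missing evidence.\n"
    "Score 2: complete, correct explanation supported by evidence."
)

def llm_score(response_text: str, model: str = "gpt-4o-mini") -> int:
    """Ask the LLM to grade one constructed response against the rubric."""
    completion = client.chat.completions.create(
        model=model,  # placeholder model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": ("You are a science assessment rater. Apply the rubric "
                         "and reply with a single integer score only.")},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\n"
                        f"Student response:\n{response_text}\n\nScore:"},
        ],
    )
    return int(completion.choices[0].message.content.strip())

# Hypothetical alignment check: human scores vs. LLM scores on the same
# responses, using quadratic weighted kappa (suited to ordinal scales).
responses = ["Plants make their own food using sunlight.", "It just happens."]
human_scores = [2, 0]  # hypothetical gold labels from human raters
llm_scores = [llm_score(r) for r in responses]
print(cohen_kappa_score(human_scores, llm_scores, weights="quadratic"))
```

In practice a parser more robust than int() and batched requests would be needed; the point of the sketch is the rubric-conditioned prompt paired with an ordinal agreement check against human scores.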