Science

“Unnatural How Natural It Was”: Using a Performance Task and Simulated Classroom for Preservice Secondary Teachers to Practice Engaging Student Avatars in Scientific Argumentation

Facilitating discussions is a key approach that science teachers use to engage students in scientific argumentation. However, facilitating argumentation-focused discussions is an ambitious teaching practice that can be difficult to learn to do well, especially for preservice teachers (PSTs), who typically have limited opportunities to try out and refine this practice.

Author/Presenter

Jamie N. Mikeska

Calli Shekell

Jennifer Dix

Pamela S. Lottero-Perdue

Lead Organization(s)
Year
2022
Short Description

Facilitating discussions is a key approach that science teachers use to engage students in scientific argumentation. However, facilitating argumentation-focused discussions is an ambitious teaching practice that can be difficult to learn to do well, especially for preservice teachers (PSTs), who typically have limited opportunities to try out and refine this practice. This study examines secondary PSTs’ perceptions and engagement with a science performance task—used within an online, simulated classroom consisting of five middle school student avatars—to practice this ambitious teaching practice.

Examining Elementary Science Teachers' Responses to Assessment Tasks Designed to Measure Their Content Knowledge for Teaching About Matter and Its Interactions

Despite the importance of developing elementary science teachers' content knowledge for teaching (CKT), few assessments have been designed to measure the full breadth of their CKT at scale. Our overall research project addressed this gap by developing an online assessment to measure elementary preservice teachers' CKT about matter and its interactions. This study, part of our larger project, reports findings from one component of the item development process that examined the construct validity of 118 CKT about matter assessment items.

Author/Presenter

Jamie N. Mikeska

Dante Cisterna

Heena Lakhani

Allison K. Bookbinder

David L. Myers

Luronne Vaval

Lead Organization(s)
Year
2022
Short Description

Despite the importance of developing elementary science teachers' content knowledge for teaching (CKT), few assessments have been designed to measure the full breadth of their CKT at scale. Our overall research project addressed this gap by developing an online assessment to measure elementary preservice teachers' CKT about matter and its interactions. This study, part of our larger project, reports findings from one component of the item development process that examined the construct validity of 118 CKT about matter assessment items.

Science Teaching and Learning in Linguistically Super-Diverse Multicultural Classrooms

American schools are becoming more linguistically diverse as immigrants and resettled refugees who speak various languages and dialects arrive in the United States from around the world. This demographic change shifts US classrooms toward super-diversity as the new norm, or mainstream, across all grade levels (Enright 2011; Park, Zong and Batalova 2018; Vertovec 2007). In super-diverse classroom contexts, students come from varied migration channels, immigration statuses, languages, countries of origin, and religions, which contribute to new and complex social configurations of the classroom.

Author/Presenter

Minjung Ryu

Jocelyn Elizabeth Nardo

Mavreen Rose S. Tuvilla

Camille Gabrielle Love 

Year
2022
Short Description

In super-diverse classroom contexts, students come from varied migration channels, immigration statuses, languages, countries of origin, and religions, which contribute to new and complex social configurations of the classroom. Super-diversity thus encourages educators and researchers to develop nuanced understandings of the complexity it brings to educational settings and to reconsider instructional approaches that we have believed to be effective. This chapter provides insight into the complexity of teaching science in linguistically super-diverse classrooms through the case of Riverview High School.

MindHive: An Online Citizen Science Tool and Curriculum for Human Brain and Behavior Research

MindHive is an online, open science, citizen science platform co-designed by a team of educational researchers, teachers, cognitive and social scientists, UX researchers, community organizers, and software developers to support real-world brain and behavior research for (a) high school students and teachers who seek authentic STEM research experiences, (b) neuroscientists and cognitive/social psychologists who seek to address their research questions outside of the lab, and (c) community-based organizations who seek to conduct grassroots, science-based research for policy change.

Author/Presenter

Suzanne Dikker

Yury Shevchenko

Kim Burgas

Kim Chaloner

Marc Sole

Lucy Yetman-Michaelson

Ido Davidesco

Rebecca Martin

Camillia Matuk

Lead Organization(s)
Year
2022
Short Description

MindHive is an online, open science, citizen science platform co-designed by a team of educational researchers, teachers, cognitive and social scientists, UX researchers, community organizers, and software developers to support real-world brain and behavior research for (a) high school students and teachers who seek authentic STEM research experiences, (b) neuroscientists and cognitive/social psychologists who seek to address their research questions outside of the lab, and (c) community-based organizations who seek to conduct grassroots, science-based research for policy change.

AI for Tackling STEM Education Challenges

Artificial intelligence (AI), an emerging technology, finds increasing use in STEM education and STEM education research (e.g., Zhai et al., 2020b; Ouyang et al., 2022; Linn et al., 2023). AI, defined as a technology to mimic human cognitive behaviors, holds great potential to address some of the most challenging problems in STEM education (Neumann and Waight, 2020; Zhai, 2021). Amongst these is the challenge of supporting all students to meet the vision for science learning in the 21st century laid out, for example, in the U.S.

Author/Presenter

Xiaoming Zhai

Knut Neumann

Joseph Krajcik

Year
2023
Short Description

To best support students in developing competence, assessments are needed that allow students to use knowledge to solve challenging problems and make sense of phenomena. These assessments need to be designed and tested so that they validly locate students on the learning progression and hence provide feedback to students and teachers about meaningful next steps in their learning. Yet such tasks are time-consuming to score, and it is challenging to provide students with appropriate feedback that develops their knowledge to the next level.
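As a rough illustration of the feedback loop described above, the sketch below maps an automatically assigned score to a learning-progression level and a next-step message. It is a minimal example, not the authors' system; the level descriptors, messages, and function name are hypothetical.

# Minimal sketch (not the authors' system): map an automatically assigned
# learning-progression level to next-step feedback for the student.
# The level descriptors and messages below are hypothetical placeholders.

PROGRESSION_FEEDBACK = {
    1: "Describes the phenomenon but does not yet cite evidence. "
       "Next step: identify data that could support your claim.",
    2: "Cites evidence, but the reasoning is implicit. "
       "Next step: explain how the evidence supports the claim.",
    3: "Links claim, evidence, and reasoning. "
       "Next step: consider and rule out alternative explanations.",
}


def feedback_for(predicted_level: int) -> str:
    """Return a next-step message for an automatically scored response."""
    return PROGRESSION_FEEDBACK.get(
        predicted_level,
        "Score falls outside the modeled progression; route to teacher review.",
    )


if __name__ == "__main__":
    print(feedback_for(2))

In practice the predicted level would come from a validated automated scorer and the messages would be tied to the specific learning progression being assessed.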

Applying Machine Learning to Automatically Assess Scientific Models

Involving students in scientific modeling practice is one of the most effective approaches to achieving next-generation science education learning goals. Given the complexity and multi-representational features of scientific models, scoring student-developed models is time- and cost-intensive, making it one of the most challenging assessment practices in science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners.

Author/Presenter

Xiaoming Zhai

Peng He

Joseph Krajcik

Year
2022
Short Description

Involving students in scientific modeling practice is one of the most effective approaches to achieving next-generation science education learning goals. Given the complexity and multi-representational features of scientific models, scoring student-developed models is time- and cost-intensive, making it one of the most challenging assessment practices in science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners. This study utilized machine learning (ML), an advanced form of artificial intelligence (AI), to develop an approach to automatically score student-drawn models and their written descriptions of those models.
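To make the scoring idea concrete, the sketch below shows the general technique of training a supervised text classifier to assign rubric levels to students' written model descriptions. It is a minimal illustration under assumed details, not the study's actual pipeline; the example responses, rubric levels, and feature choices are invented for demonstration, and scoring the drawn models themselves would require a separate image-based approach.

# A minimal sketch of supervised scoring of students' written model
# descriptions. This illustrates the general technique only, not the
# study's pipeline; the training texts, rubric levels, and features
# below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-scored examples: a written description paired with a
# rubric level (0 = non-normative, 1 = partial, 2 = complete).
train_texts = [
    "the water disappears when it gets hot",
    "water particles move faster and escape into the air",
    "particles of water gain energy, spread apart, and leave as a gas",
]
train_levels = [0, 1, 2]

# TF-IDF features plus logistic regression: a common baseline for scoring
# short constructed responses at scale.
scorer = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
scorer.fit(train_texts, train_levels)

# Score a new, unseen student description automatically.
print(scorer.predict(["the particles speed up and turn into water vapor"]))

A real scorer of this kind would be trained and validated on a much larger set of human-scored responses before its predictions were used to give students or teachers feedback.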
