Supporting Instructional Decision Making: The Potential of Automatically Scored Three-Dimensional Assessment System (Collaborative Research: Harris)

This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems.

Full Description
Led by collaborators from the University of Georgia, Michigan State University, the University of Illinois at Chicago, and WestEd, the project team will develop computer scoring algorithms, a suite of AutoRs, and an array of pedagogical content knowledge supports (PCKSs). These products will assist middle school science teachers in using 3D assessments, making informative instructional changes, and improving students' 3D learning. The project will generate knowledge about teachers' uses of 3D assessments and examine the potential of automatically scored 3D assessments.
 
The project will achieve its research goals using a mixed-methods design in three phases.

Phase I: Develop AutoRs. Machine scoring models for the 3D assessment tasks will be developed using existing data. To support teachers' interpretation and use of automatic scores, the project team will develop AutoRs and examine how teachers make use of these initial reports. Based on classroom observations and teacher feedback, the AutoRs will be refined iteratively so that teachers can use them more efficiently and productively.

Phase II: Develop and test PCKSs. Findings from Phase I, the literature, and interviews with experienced teachers will inform the development of PCKSs. The project will provide professional learning for teachers on how to use the AutoRs and PCKSs, and will study how teachers use them to make instructional decisions. The findings will be used to refine the PCKSs.

Phase III: Classroom implementation. In this phase, a study will be conducted with a new group of teachers to explore the effectiveness and usability of the AutoRs and PCKSs in supporting teachers' instructional decisions and students' 3D learning.

This project will create knowledge about, and formulate a theory of, how teachers interpret and attend to students' performance on 3D assessments, providing critical information on how to support teachers' responsive instructional decision making. The collaborative team will widely disseminate its products, including the 3D assessment scoring algorithms, AutoRs, PCKSs, corresponding professional development programs, and publications, to facilitate 3D instruction and learning.
 

Selected Publications:

  • Zhai, X., He, P., & Krajcik, J. (2022). Applying machine learning to automatically assess scientific models. Journal of Research in Science Teaching, 59(10), 1765-1794. http://dx.doi.org/10.1002/tea.21773
  • He, P., Shin, N., Zhai, X., & Krajcik, J. (2023). Guiding Teacher Use of Artificial Intelligence-Based Knowledge-in-Use Assessment to Improve Instructional Decisions: A Conceptual Framework. In X. Zhai & J. Krajcik (Eds.), Uses of Artificial Intelligence in STEM Education. Under Review.
  • He, P., Zhai, X., Shin, N., & Krajcik, J. (2023). Using Rasch Measurement to Assess Knowledge-in-Use in Science Education. In X. Liu & W. Boone (Eds.), Advances in Applications of Rasch Measurement in Science Education. Springer Nature. Under Review.

Selected Presentations:

  • Zhai, X. (2022). AI Applications in Innovative Science Assessment Practices. Invited Talk. Faculty of Education, Beijing Normal University. 
  • Panjwani, S. & Zhai, X. (2022). AI for students with learning disabilities: A systematic review. International Conference of AI-based Assessment in STEM. Athens, Georgia.
  • Latif, E., & Zhai, X. (2023). AI-Scorer: Principles for Designing Artificial Intelligence-Augmented Instructional Systems. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • Latif, E., Zhai, X., Amerman, H., & He X. (2022). AI-Scorer: An AI-augmented Teacher Feedback Platform. International Conference of AI-based Assessment in STEM. Athens, Georgia. 
  • Zhai, X. (2022). AI-based Assessment. STEAM-Integrated Robotics Education Workshop. The University of Georgia & Daegu National Univers. 
  • Weiser, G. (2022). Actionable Instructional Decision Supports for Teachers Using AI-generated Score Reports. 2022 Annual Conference of the National Association for Research in Science Teaching. Vancouver, BC, Canada. 
  • Huang, R., Yin, Y., Zaidi, S. Z., Chen, Y., & Strasser, M. (2023). Addressing challenges in formative assessment practices by artificial intelligence: A systematic review. 2023 Annual Meeting of the American Educational Research Association. Chicago, IL. Under Review.
  • Amerman, H., & Zhai, X. (2023). Teacher Acceptance of Artificial Intelligence Technologies for Teaching and Learning: A Systematic Review. 2023 Annual Meeting of the National Association for Research in Science Teaching. Chicago, IL. Accepted.
  • Zhai, X. (2022). Applying AI in Science Education. Invited Talk. Computer Science Department, University of Georgia. 
  • He, X., Chen, Y., Zhai, X., & Yin, Y. (2023). Automatically Generated Assessment Reports for Teachers' Formative Uses: A Review of Dashboard Design. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • Shin, N., He, P., Nilsen, K., Amerman, H., Krajcik, J., & Zhai, X. (2023). Design Model for Pedagogical Content Knowledge Supports based on AI-Automated Scores. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • He, P., Shin, N., Amerman, H., Zhai, X., & Krajcik, J. (2023). Designing and Applying Scoring Rubrics for Automatically-Scored Knowledge-in-Use Assessment Tasks for Instructional Decisions. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • He, P., Shin, N., Zhai, X., & Krajcik, J. (2022). Guiding Teacher Use of Artificial Intelligence Based Knowledge-in-Use Assessment to Improve Instructional Decisions: A Conceptual Framework. International Conference of AI-based Assessment in STEM. Athens, Georgia. 
  • Zhai, X. (2021). Keynote: Machine Learning-based Next Generation Science Assessments. International Conference on Science Education and Technology. Indonesia.
  • Zhai, X. (2022). Keynote: Using AI to Advance Next-Generation Science Learning. 2nd Global Conference on International Education (GLOBE). Beijing Normal University. 
  • Zhai, X. (2022). Machine Learning Scoring Bias on Students that are Underrepresented in STEM. 2022 Annual Conference of the National Association for Research in Science Teaching. Vancouver, BC, Canada. 
  • Zhai, X. (2021). Machine Learning-based Next Generation Science Assessment Practices in the US. Invited Talk. Leibniz Universität Hannover, Germany. 
  • He, X., Chen, Y., Zhai, X., & Yin, Y. (2022). Reviewing Automatically Generated Assessment Reports for Teachers' Formative Uses. International Conference of AI-based Assessment in STEM. Athens, Georgia. 
  • Amerman, H., Zhai, X., & Krajcik, J. (2022). Science Teachers' Perception of AI-based Learning Technologies in Classroom. Annual Conference of the National Association for Research in Science Teaching. Vancouver, BC, Canada. 
  • Zhai, X. (2023). Structural Poster Session. AI-Augmented Assessment Tools to Support Instruction in STEM. Annual Conference of the American Educational Research Association. Chicago. Under Review.
  • Zhai, X. (2022). Symposium: AI-based Innovative Assessments in Science. Annual Conference of the National Association for Research in Science Teaching. Vancouver, BC, Canada. 
  • Zhai, X., He, X., Latif, E., He, P., Krajcik, J., Yin, Y., & Harris, C. (2023). Teacher Interpretation of AI-Augmented Assessment Reports. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • He, P., Shin, N., Zhai, X., & Krajcik, J. (2023). Teacher Uses of Artificial Intelligence Based Classroom Assessment to Improve Instructional Decisions: A Conceptual Framework. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • He, P., Shin, N., Nielsen, K., Amerman, H., & Krajcik, J. (2023). Developing three-dimensional instructional strategies based on student performance on classroom assessments. Annual Meeting of the National Association for Research in Science Teaching. Chicago. Accepted.
  • Zhai, X., He, X., Amerman, H., & Panjwani, S. (2023). Validity Issues of Teachers' Interpretation and Use of Automatic Scores. Annual Meeting of the American Educational Research Association. Chicago. Under Review.
  • Zhai, X. (2022). Validity of Machine Learning-based Next Generation Science Assessment. Invited Talk. College of Education, Zhejiang University. 