A design study was conducted to test a machine learning (ML)-enabled automated feedback system developed to support students’ revision of scientific arguments using data from published sources and simulations. This paper focuses on three simulation-based scientific argumentation tasks, called Trap, Aquifer, and Supply, which were part of an online science curriculum module on groundwater systems for secondary school students. ML was used to develop automated scoring models for students’ argumentation texts and to explore emerging patterns between students’ simulation interactions and argumentation scores. The study occurred as we were developing the first version of simulation feedback to augment the existing argument feedback. We studied two cohorts of students who received argument-only (AO) feedback (n = 164) or argument-and-simulation (AS) feedback (n = 179), and investigated how students in each condition interacted with the simulations and wrote and revised their scientific arguments before and after receiving their respective feedback. Overall, the same percentage of students (49% in each condition) revised their arguments after feedback, and the revised arguments received significantly higher scores in both feedback conditions, p < .001. A significantly greater proportion of AS students (36% across the three tasks) reran the simulations after feedback than AO students (5%), p < .001. AS students who reran the simulation increased their simulation scores for the Trap task, p < .001, and for the Aquifer task, p < .01. AO students, who did not receive simulation feedback, increased their simulation scores after rerunning only for the Trap task, p < .05. For the Trap and Aquifer tasks, students who increased their simulation scores were more likely to increase their argument scores in revision than those who did not increase simulation scores or did not revisit the simulations at all after simulation feedback was provided.
This pattern was not found for the Supply task. Based on these findings, we discuss the strengths and weaknesses of the current automated feedback design, particularly its use of ML.
Lee, H., Gweon, G., Lord, T., Paessel, N., Pallant, A., & Pryputniewicz, S. (2021). Machine learning-enabled automated feedback: Supporting students’ revision of scientific arguments based on data drawn from simulation. Journal of Science Education and Technology.