Assessment

Empowering Teachers to See and Support Student Use of Crosscutting Concepts in the Life Sciences

The project focuses on the development of formative assessment tools that highlight assets of students’ use of crosscutting concepts (CCCs) while engaged in science and engineering practices in grades 9-12 Life Sciences.

Lead Organization(s): 
Award Number: 
2100822
Funding Period: 
Sun, 08/01/2021 to Wed, 07/31/2024
Full Description: 

The project focuses on the development of formative assessment tools that highlight assets of students’ use of crosscutting concepts (CCCs) while engaged in science and engineering practices in grades 9-12 Life Sciences. In response to the calls set forth by the Framework for K-12 Science Education and the Next Generation Science Standards (NGSS), the field has most successfully researched and developed assessment tools for disciplinary core ideas and the science and engineering practices. The CCCs, however, which serve as the connective links across science domains, remain more abstractly addressed. Presently, science educators have little guidance for what student use of CCCs looks like or how to assess and nurture such use. This project, with its explicit attention to the CCCs, advances true three-dimensional scientific understanding in both research and the classroom. Leveraging formative assessment as a vehicle for student and teacher development taps into proven instructional strategies (e.g., sharing visions of successful learning, descriptive feedback, self-assessment), while also advancing formative assessment itself by strengthening and illustrating how these strategies may focus on the CCCs. Further, a strengths-based approach will center culturally related differences in students’ use of CCCs to achieve more equitable opportunities to engage in classroom sensemaking practices. This work impacts the field of science education by 1) enabling a more thorough realization of NGSS ideals, 2) strengthening teachers’ abilities to identify diverse demonstrations of CCCs, and 3) showcasing the impact of novel classroom tools to sharpen teachers’ abilities to solicit, notice, and act upon evidence of emergent student scientific thinking within their instructional practices.

This design-based implementation research project will engage teachers in the iterative development and refinement of rubrics that support three-dimensional science understanding through formative assessment. The high school biology classrooms that compose the study site are engaged in instruction inspired by ambitious science teaching. An inductive, bottom-up approach (Brookhart, 2013) will allow researchers, teachers, and students to co-construct rubrics. Analysis of classroom observations, artifact collection, interviews with teachers and students, and expert-panel ratings will produce a rubric for each CCC that integrates relevant science and engineering practices and is applicable across a range of disciplinary core ideas. These rubrics will illustrate progressions of increasingly advanced use of each of the CCCs to guide the construction, pursuit, and assessment of learning goals. Two design cycles will allow for the collection of validity evidence and produce rubrics with the potential for broad application by educators. Complementary lines of qualitative and quantitative (i.e., psychometric) analysis will contribute to the development and validation of the rubrics and their formative uses. Project inquiry will focus on 1) how the rubrics can represent CCCs for key disciplinary practices, 2) the extent to which teachers’ and students’ understandings of the rubrics align, and 3) how implementation of the rubrics impacts teachers’ and students’ understandings of the CCCs.
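
As a concrete illustration of what a progression-style rubric might look like once encoded for classroom or research use, the minimal sketch below represents one hypothetical CCC (cause and effect) as an ordered set of levels. The level descriptors and structure are illustrative assumptions, not the rubrics the project will co-construct with teachers and students.

```python
# A minimal sketch, assuming one CCC ("cause and effect") is represented as an
# ordered set of progression levels; the descriptors are hypothetical.
from dataclasses import dataclass


@dataclass
class RubricLevel:
    level: int
    descriptor: str


cause_and_effect_rubric = [
    RubricLevel(1, "Names a cause or an effect without linking the two"),
    RubricLevel(2, "Links a specific cause to a specific effect in the phenomenon"),
    RubricLevel(3, "Uses evidence to distinguish a causal relationship from a correlation"),
    RubricLevel(4, "Reasons about mechanism to predict effects in a new context"),
]


def descriptor_for(level: int) -> str:
    """Look up the descriptor a teacher or student would see for a given level."""
    return next(r.descriptor for r in cause_and_effect_rubric if r.level == level)


print(descriptor_for(3))
```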

Supporting Instructional Decision Making: The Potential of Automatically Scored Three-Dimensional Assessment System (Collaborative Research: Krajcik)

This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems.

Lead Organization(s): 
Award Number: 
2100964
Funding Period: 
Wed, 09/01/2021 to Sun, 08/31/2025
Full Description: 
This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems. Led by collaborators from the University of Georgia, Michigan State University, the University of Illinois at Chicago, and WestEd, the project team will develop computer scoring algorithms, a suite of AutoRs, and an array of pedagogical content knowledge supports (PCKSs). These products will assist middle school science teachers in using 3D assessments, making informed instructional changes, and improving students’ 3D learning. The project will generate knowledge about teachers’ uses of 3D assessments and examine the potential of automatically scored 3D assessments.
 
The project will achieve the research goals using a mixed-methods design in three phases. Phase I: Develop AutoRs. Machine scoring models for the 3D assessment tasks will be developed using existing data. To support teachers’ interpretation and use of automatic scores, the project team will develop AutoRs and examine how teachers make use of these initial reports. Based on observations and feedback from teachers, AutoRs will be refined using an iterative procedure so that teachers can use them more efficiently and productively. Phase II: Develop and test PCKSs. Findings from Phase I, the literature, and interviews with experienced teachers will be employed to develop PCKSs. The project will provide professional learning for teachers on how to use the AutoRs and PCKSs. The project will research how teachers use AutoRs and PCKSs to make instructional decisions. The findings will be used to refine the PCKSs. Phase III: Classroom implementation. In this phase, a study will be conducted with a new group of teachers to explore the effectiveness and usability of AutoRs and PCKSs in terms of supporting teachers’ instructional decisions and students’ 3D learning. This project will create knowledge about and formulate a theory of how teachers interpret and attend to students’ performance on 3D assessments, providing critical information on how to support teachers’ responsive instructional decision making. The collaborative team will widely disseminate its products, including the 3D assessment scoring algorithms, AutoRs, PCKSs, the corresponding professional development programs, and publications, to facilitate 3D instruction and learning.
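
The abstract does not specify the machine learning approach behind the scoring models; as a minimal sketch of what a machine scoring model for short constructed responses could look like, the example below trains a simple text classifier (TF-IDF features with logistic regression). The sample responses, rubric levels, and model choice are illustrative assumptions, not the project's algorithms.

```python
# A minimal sketch of automated scoring for short constructed responses,
# assuming a bag-of-words classifier; data and scores are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: student explanations with human-assigned rubric scores.
responses = [
    "heating the gas makes particles move faster and collide with the walls more often",
    "the pressure goes up because it just does",
    "more collisions per second against the container walls raise the pressure",
    "i am not sure",
]
scores = [2, 0, 2, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(responses, scores)

# A predicted score like this would feed into an automatically generated report (AutoR).
print(model.predict(["faster particles hit the walls more often when heated"]))
```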

Supporting Instructional Decision Making: The Potential of Automatically Scored Three-Dimensional Assessment System (Collaborative Research: Zhai)

This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems.

Lead Organization(s): 
Award Number: 
2101104
Funding Period: 
Wed, 09/01/2021 to Sun, 08/31/2025
Full Description: 
This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems. Led by collaborators from the University of Georgia, Michigan State University, the University of Illinois at Chicago, and WestEd, the project team will develop computer scoring algorithms, a suite of AutoRs, and an array of pedagogical content knowledge supports (PCKSs). These products will assist middle school science teachers in using 3D assessments, making informed instructional changes, and improving students’ 3D learning. The project will generate knowledge about teachers’ uses of 3D assessments and examine the potential of automatically scored 3D assessments.
 
The project will achieve the research goals using a mixed-methods design in three phases. Phase I: Develop AutoRs. Machine scoring models for the 3D assessment tasks will be developed using existing data. To support teachers’ interpretation and use of automatic scores, the project team will develop AutoRs and examine how teachers make use of these initial reports. Based on observations and feedback from teachers, AutoRs will be refined using an iterative procedure so that teachers can use them more efficiently and productively. Phase II: Develop and test PCKSs. Findings from Phase I, the literature, and interviews with experienced teachers will be employed to develop PCKSs. The project will provide professional learning for teachers on how to use the AutoRs and PCKSs. The project will research how teachers use AutoRs and PCKSs to make instructional decisions. The findings will be used to refine the PCKSs. Phase III: Classroom implementation. In this phase, a study will be conducted with a new group of teachers to explore the effectiveness and usability of AutoRs and PCKSs in terms of supporting teachers’ instructional decisions and students’ 3D learning. This project will create knowledge about and formulate a theory of how teachers interpret and attend to students’ performance on 3D assessments, providing critical information on how to support teachers’ responsive instructional decision making. The collaborative team will widely disseminate its products, including the 3D assessment scoring algorithms, AutoRs, PCKSs, the corresponding professional development programs, and publications, to facilitate 3D instruction and learning.

Supporting Instructional Decision Making: The Potential of Automatically Scored Three-Dimensional Assessment System (Collaborative Research: Weiser)

This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems.

Lead Organization(s): 
Award Number: 
2101112
Funding Period: 
Wed, 09/01/2021 to Sun, 08/31/2025
Full Description: 
This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems. Led by collaborators from the University of Georgia, Michigan State University, the University of Illinois at Chicago, and WestEd, the project team will develop computer scoring algorithms, a suite of AutoRs, and an array of pedagogical content knowledge supports (PCKSs). These products will assist middle school science teachers in using 3D assessments, making informed instructional changes, and improving students’ 3D learning. The project will generate knowledge about teachers’ uses of 3D assessments and examine the potential of automatically scored 3D assessments.
 
The project will achieve the research goals using a mixed-methods design in three phases. Phase I: Develop AutoRs. Machine scoring models for the 3D assessment tasks will be developed using existing data. To support teachers’ interpretation and use of automatic scores, the project team will develop AutoRs and examine how teachers make use of these initial reports. Based on observations and feedback from teachers, AutoRs will be refined using an iterative procedure so that teachers can use them more efficiently and productively. Phase II: Develop and test PCKSs. Findings from Phase I, the literature, and interviews with experienced teachers will be employed to develop PCKSs. The project will provide professional learning for teachers on how to use the AutoRs and PCKSs. The project will research how teachers use AutoRs and PCKSs to make instructional decisions. The findings will be used to refine the PCKSs. Phase III: Classroom implementation. In this phase, a study will be conducted with a new group of teachers to explore the effectiveness and usability of AutoRs and PCKSs in terms of supporting teachers’ instructional decisions and students’ 3D learning. This project will create knowledge about and formulate a theory of how teachers interpret and attend to students’ performance on 3D assessments, providing critical information on how to support teachers’ responsive instructional decision making. The collaborative team will widely disseminate its products, including the 3D assessment scoring algorithms, AutoRs, PCKSs, the corresponding professional development programs, and publications, to facilitate 3D instruction and learning.

Supporting Instructional Decision Making: The Potential of Automatically Scored Three-Dimensional Assessment System (Collaborative Research: Yin)

This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems.

Award Number: 
2101166
Funding Period: 
Wed, 09/01/2021 to Sun, 08/31/2025
Full Description: 
This project will study the utility of a machine learning-based assessment system for supporting middle school science teachers in making instructional decisions based on automatically generated student reports (AutoRs). The assessments target three-dimensional (3D) science learning by requiring students to integrate scientific practices, crosscutting concepts, and disciplinary core ideas to make sense of phenomena or solve complex problems. Led by collaborators from the University of Georgia, Michigan State University, the University of Illinois at Chicago, and WestEd, the project team will develop computer scoring algorithms, a suite of AutoRs, and an array of pedagogical content knowledge supports (PCKSs). These products will assist middle school science teachers in using 3D assessments, making informed instructional changes, and improving students’ 3D learning. The project will generate knowledge about teachers’ uses of 3D assessments and examine the potential of automatically scored 3D assessments.
 
The project will achieve the research goals using a mixed-methods design in three phases. Phase I: Develop AutoRs. Machine scoring models for the 3D assessment tasks will be developed using existing data. To support teachers’ interpretation and use of automatic scores, the project team will develop AutoRs and examine how teachers make use of these initial reports. Based on observations and feedback from teachers, AutoRs will be refined using an iterative procedure so that teachers can use them more efficiently and productively. Phase II: Develop and test PCKSs. Findings from Phase I, the literature, and interviews with experienced teachers will be employed to develop PCKSs. The project will provide professional learning for teachers on how to use the AutoRs and PCKSs. The project will research how teachers use AutoRs and PCKSs to make instructional decisions. The findings will be used to refine the PCKSs. Phase III: Classroom implementation. In this phase, a study will be conducted with a new group of teachers to explore the effectiveness and usability of AutoRs and PCKSs in terms of supporting teachers’ instructional decisions and students’ 3D learning. This project will create knowledge about and formulate a theory of how teachers interpret and attend to students’ performance on 3D assessments, providing critical information on how to support teachers’ responsive instructional decision making. The collaborative team will widely disseminate its products, including the 3D assessment scoring algorithms, AutoRs, PCKSs, the corresponding professional development programs, and publications, to facilitate 3D instruction and learning.
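
Complementing the scoring-model sketch shown with the first award in this collaborative set, the example below illustrates how per-student machine scores might be aggregated into a teacher-facing report (AutoR). The students, dimensions, and report fields are hypothetical and are not taken from the project's actual report design.

```python
# A minimal sketch of aggregating machine scores into a class-level report;
# all data and field names are hypothetical.
from collections import Counter

machine_scores = [
    {"student": "S01", "practice": 2, "crosscutting": 1, "core_idea": 2},
    {"student": "S02", "practice": 1, "crosscutting": 0, "core_idea": 1},
    {"student": "S03", "practice": 2, "crosscutting": 2, "core_idea": 1},
]


def build_report(scores, dimension):
    """Summarize class performance on one dimension and flag students scoring 0."""
    level_counts = Counter(s[dimension] for s in scores)
    needs_support = [s["student"] for s in scores if s[dimension] == 0]
    return {"dimension": dimension, "level_counts": dict(level_counts), "needs_support": needs_support}


for dim in ("practice", "crosscutting", "core_idea"):
    print(build_report(machine_scores, dim))
```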

Crowd-Sourced Online Nexus for Developing Assessments of Middle-School Physical Science Disciplinary Core Ideas

This project will develop and test a web-based platform to increase the quality of teacher-administered tests in science classrooms. It draws on classroom teacher knowledge while employing the rigorous statistical methods used in standardized assessment creation and validation. The content focus is on the disciplinary core ideas for grades 6-8 physical science in the Next Generation Science Standards (NGSS).

Lead Organization(s): 
Award Number: 
2101493
Funding Period: 
Wed, 09/01/2021 to Sat, 08/31/2024
Full Description: 

This project will develop and test a web-based platform to increase the quality of teacher-administered tests in science classrooms. It draws on classroom teacher knowledge while employing the rigorous statistical methods used in standardized assessment creation and validation. The content focus is on the disciplinary core ideas for grades 6-8 physical science in the Next Generation Science Standards (NGSS). Teachers now spend an estimated 20% of their time on assessment, yet have relatively few tools to draw upon when creating assessments. Over time, they learn to adapt items from available curriculum materials and textbooks. On the other hand, standardized assessment developers have the benefit of expert item writers, long development cycles, a large and diverse student population, and sophisticated psychometric tools. This project combines these two approaches, drawing upon teachers to contribute their best items, then immediately piloting them using crowdsourced subjects. Psychometric analysis generates measures of item quality and then “recycles” items to participating teachers for improvement. In this way, a large test item bank will be constructed utilizing teacher input, with each item possessing appropriate reading levels, NGSS alignment, scientific accuracy, appropriate difficulty, high statistical discrimination, and negligible differences by gender, race, or ethnicity. Involvement in this project has potential benefits for teachers lacking formal training in assessment, familiarizing participants with the NGSS and with the elements of high-quality test development.

The project will gauge the merits of a novel collaborative system for the development and validation of high-quality test items and assessment instruments. It will measure the degree to which teachers can generate effective items and improve existing items that exhibit problems when given the guidance of rigorous psychometric measures of item quality. It will build on earlier research showing that an adult, crowd-sourced sample works well as an initial proxy for grade 6-8 science students, allowing for extremely rapid feedback on item quality (often overnight), with item response theory computation used to establish item difficulty, item discrimination, guessing levels, and differential item functioning (gender and racial/ethnic bias). In addition, computed measures of misconception strength, scientific correctness, reading level, and match to the NGSS will help to guide revision by teachers. Bayesian futility analysis will “triage” items, cutting short the testing of items deemed unlikely to meet quality criteria and thereby lowering costs. Field testing with a large sample of grade 6-8 students will provide a final check on item quality. Items will be developed much more inexpensively than with the methods used for standardized test development. Two pairs of assessment instruments (public-release and secure versions for chemistry and physics) will be constructed and made freely available to science teachers for classroom use and to education researchers and curriculum developers. A system that provides quick feedback on item quality could potentially transform university instruction and professional development opportunities in assessment. While starting with selected-response (multiple-choice) items, the project will be able to implement a larger variety of formats in the future, incorporating automated approaches as they become available.
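
The abstract names item response theory computations (difficulty, discrimination, guessing, differential item functioning) but does not describe their implementation. As a simplified stand-in, the sketch below computes classical item statistics on simulated pilot data: proportion correct as difficulty, point-biserial correlation as discrimination, and a crude between-group gap as a rough screen for differential functioning. All data and groupings are illustrative assumptions, not the project's analyses.

```python
# A simplified stand-in for the psychometric pipeline: classical item statistics
# on simulated crowdsourced pilot responses (1 = correct, 0 = incorrect).
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 8
responses = (rng.random((n_respondents, n_items)) < rng.uniform(0.3, 0.9, n_items)).astype(float)
group = rng.integers(0, 2, size=n_respondents)  # hypothetical demographic flag

difficulty = responses.mean(axis=0)  # proportion correct (classical item "p-value")
total = responses.sum(axis=1)


def point_biserial(item, total_scores):
    """Correlate an item with the rest-of-test score as a rough discrimination index."""
    rest = total_scores - item
    return np.corrcoef(item, rest)[0, 1]


discrimination = np.array([point_biserial(responses[:, j], total) for j in range(n_items)])

# Crude differential-functioning screen: gap in proportion correct between groups.
dif_gap = np.abs(responses[group == 0].mean(axis=0) - responses[group == 1].mean(axis=0))

for j in range(n_items):
    print(f"item {j}: p={difficulty[j]:.2f}  r_pb={discrimination[j]:.2f}  group gap={dif_gap[j]:.2f}")
```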

Supporting Teachers to Teach Mathematics through Problem Posing

This project aims to support teachers to engage their students in mathematical problem posing (problem-posing-based learning, or P-PBL). P-PBL is a powerful approach to the teaching and learning of mathematics, and provides students with opportunities to engage in authentic mathematical practices.

Lead Organization(s): 
Award Number: 
2101552
Funding Period: 
Sun, 08/01/2021 to Thu, 07/31/2025
Full Description: 

This project aims to support teachers to engage their students in mathematical problem posing (problem-posing-based learning, or P-PBL). P-PBL is a powerful approach to the teaching and learning of mathematics, and provides students with opportunities to engage in authentic mathematical practices. For example, conjecturing in mathematics, a form of problem posing, often plays an important role in solving complex problems, and problem posing is an important component of mathematical modeling. Yet despite its importance, widely used curriculum materials fail to incorporate P-PBL in substantial and consistent ways, leaving teachers with few resources to enact this process. This project will develop problem-posing lessons and illustrative cases of teachers implementing P-PBL that will not only support teachers to develop a vision of what P-PBL looks like and how to implement it in their own classrooms, but will also serve as rich resources for professional development (PD) providers. This project will generate valuable findings about teaching mathematics through problem posing for district administrators, mathematics teachers, educators, and researchers, as well as curriculum developers and policy makers. The team will develop and pilot a set of 20-30 research-based P-PBL cases that provide critical details for the implementation of P-PBL and reveal “lessons learned” from the development process.

The project promises broader impact on the field of mathematics education as the first goal is to support teachers to teach mathematics through engaging their students in mathematical problem posing. By guiding students to construct and investigate their own problems, P-PBL both helps to create mathematical learning opportunities and develops students’ mathematical agency and positive mathematical identities. A networked improvement community of teachers and researchers will integrate problem posing into daily mathematics instruction and continuously improve the quality of P-PBL through iterative task and lesson design. The intellectual merit of this project is its contribution of new and important insights about teaching mathematics through problem posing. This will be realized through the second project goal, which is to longitudinally investigate the promise of supporting teachers to teach with P-PBL for enhancing teachers’ instructional practice and students’ learning. A quasi-experimental design coupled with design-based research methodology and improvement science will be used to understand how, when, and why P-PBL works in practice. Specifically, the project will follow a sample of 36 teachers and their approximately 3,600 students from six middle schools for multiple years to longitudinally explore the promise of P-PBL for developing teachers’ beliefs about problem posing, their beliefs about P-PBL, and their actual instructional practice. The project will also investigate students’ learning as measured by problem-posing performance, problem-solving performance, and mathematics disposition. The findings of the project will add not only to the field’s understanding of the promise of supporting teachers to integrate P-PBL into their mathematics instruction, but also to its understanding of the challenges that teachers face when engaging in a networked improvement community that is focused on improving tasks and lessons by integrating P-PBL.

Anchoring High School Students in Real-Life Issues that Integrate STEM Content and Literacy

Through the integration of STEM content and literacy, this project will study the ways teachers implement project practices integrating literacy activities into STEM learning. Teachers will facilitate instruction using scenarios that present students with everyday, STEM-related issues that they read and write about. After reading and engaging with math and science content, students will write a source-based argument in which they state a claim, support the claim with evidence from the texts, and explain the multiple perspectives on the issue.

Lead Organization(s): 
Award Number: 
2010312
Funding Period: 
Sat, 08/15/2020 to Sun, 07/31/2022
Full Description: 

The STEM Literacy Project sets out to support student learning through developing teacher expertise in collaborative integration of STEM in student writing and literacy skills development. Facilitated by teachers, students will read, discuss, and then write about real-world STEM scenarios, such as water quality or health. The project will build on and research a professional development program first developed through a state-supported literacy program for middle and high school science and math teachers to improve literacy-integrated instruction. The goals of this project include the following: (1) Create a community of practice that recognizes high school teachers as content experts; (2) Implement high quality professional development for teachers on STEM/Literacy integration; (3) Develop assessments based on STEM and literacy standards that inform instruction; and (4) Conduct rigorous research to understand the impact of the professional development. The program is aligned with state and national standards for college and career readiness. Project resources will be widely shared through a regularly updated project website (stemliteracyproject.org), conference presentations, and publications reaching researchers, developers, and educators. These resources will include scenario-based assessment tools and instructional materials.

Through the integration of STEM content and literacy, the project will study the ways teachers implement project practices integrating literacy activities into STEM learning. Teachers will facilitate instruction using scenarios that present students with everyday, STEM-related issues that they read and write about. After reading and engaging with math and science content, students will write a source-based argument in which they state a claim, support the claim with evidence from the texts, and explain the multiple perspectives on the issue. These scenarios provide students with agency as they craft an argument for an audience, such as a city council, a school board, or another group of stakeholders. Project research will use a mixed-methods design. Based on the work completed through the initial designs and development of scenario-based assessments, rubrics, and scoring processes, the project will study the impact on instruction and student learning. Using a triangulation design convergence model, findings will be compared and contrasted so that the data sources inform one another and support further interpretation. The project will analyze the features of STEM content learning after program-related instruction. Data collected will include pre-post student scenario-based writing; pre-post interviews of up to 40 students each year; pre-post teacher interviews; and teacher-created scenario-based assessments and supporting instructional materials. Student learning reflected in the assessments, paired with student and teacher interview responses, will provide a deeper understanding of this approach to integrating STEM and literacy. The use of discourse analysis methods will allow growth in content learning to be measured through language use. Project research will build knowledge in the field concerning how participation in teacher professional development integrating STEM content in literacy practices impacts teacher practices and student learning.

Online Practice Suite: Practice Spaces, Simulations and Virtual Reality Environments for Preservice Teachers to Learn to Facilitate Argumentation Discussions in Math and Science

This project will develop, pilot, and refine a set of coordinated and complementary activities that teacher education programs can use in both online and face-to-face settings to provide practice-based opportunities for preservice teachers to develop their ability to facilitate argumentation-focused discussions in mathematics and science.

Lead Organization(s): 
Award Number: 
2037983
Funding Period: 
Sat, 08/15/2020 to Mon, 07/31/2023
Full Description: 

In teacher education it is widely acknowledged that learning to teach requires that preservice teachers have robust, authentic, and consistent opportunities to engage in the work of teaching—ideally across different contexts, with diverse student populations, and for varied purposes—as they hone their instructional practice. Practice teaching experiences in K-12 classrooms, such as field placements and student teaching, are the most widely used approaches to provide these opportunities. In an ideal world these experiences are opportunities for preservice teachers to observe and work closely with mentor teachers and try out new instructional strategies with individual, small groups, and whole classes of K-12 students. While these experiences are critical to supporting preservice teachers' learning, it can be difficult to help preservice teachers transition from university classrooms to field placements in ways that provide them with opportunities to enact ambitious instructional strategies. This need is particularly acute in mathematics and science education, where classrooms that model strong disciplinary discourse and argumentation are not always prevalent. This challenge is amplified by the COVID-19 pandemic environment; with schools and universities across the nation operating online, many preservice teachers will miss out on opportunities to practice teaching both within their courses and in K-12 classrooms. To address this urgent challenge in STEM education, project researchers will develop, pilot, and refine a set of coordinated and complementary activities that teacher education programs can use in both online and face-to-face settings to provide practice-based opportunities for preservice teachers to develop their ability to facilitate argumentation-focused discussions in mathematics and science, a critical teaching practice in these content areas. The practice-based activities include: (1) interactive, online digital games that create targeted practice spaces to engage preservice teachers to respond to students' content-focused ideas and interactions; (2) facilitating group discussions with upper elementary or middle school student avatars in a simulated classroom using performance-based tasks; and (3) an immersive virtual reality whole-classroom environment that allows for verbal, textual and non-verbal interactions between a teacher avatar and 24 student avatars. The online practice suite, made up of these activities along with supports to help teacher educators use them effectively, represents not just an immediate remedy to the challenge of COVID-19, but a rich and flexible set of resources with the potential to support and improve teacher preparation well beyond the COVID-19 challenge.

This study will use design-based research to create this integrated system of practice teaching opportunities. This approach will involve developing and refining the individual practice activities, the integrated online practice suite, and the teacher educator support materials by working with a teacher educator community of practice and engaging up to 20 teacher educators and 400 preservice teachers in multiple rounds of tryouts and piloting during the three-year project. The project will proceed in three phases: a first phase of small-scale testing, a second phase trying the materials with teacher educators affiliated with the project team, and a third phase piloting materials with a broader group of mathematics and science teacher educators. Data sources include surveys of preservice teachers' background characteristics, perceptions of the practice activities, beliefs about content instruction, perceptions about preparedness to teach, and understanding of argumentation and discussion; videos and/or log files of their performances for each practice teaching activity; and scores on their practice teaching performances. The project team will also observe the in-class instructional activities prior to and after the use of each practice teaching activity, conduct interviews with teacher educators, and collect instructional logs from the teacher educators and instructional artifacts used to support preservice teachers' learning. Data analysis will include pre-post comparisons to examine evidence of growth in preservice math and science teachers' beliefs, perceptions, understanding, and teaching performance. The project team will also build a series of analytic memos to describe how each teacher educator used the online practice suite within the mathematics or science methods course and the factors and decisions that went into each use case. Then, they will describe and understand how the various uses and adaptations may be linked to contextual factors within these diverse settings. Findings will be used to produce empirically and theoretically grounded design principles and heuristics for these types of practice-based activities to support teacher learning.

Assessing College-Ready Computational Thinking (Collaborative Research: Wilson)

The goal of this project is to develop learning progressions and assessment items targeting computational thinking. The items will be used for a test of college-ready critical reasoning skills and will be integrated into an existing online assessment system, the Berkeley Assessment System Software.

Award Number: 
2010314
Funding Period: 
Tue, 09/01/2020 to Sat, 08/31/2024
Full Description: 

Because of the growing need for students to be college and career ready, high-quality assessments of college readiness skills are in high demand. To realize the goal of preparing students for college and careers, assessments must measure important competencies and provide rapid feedback to teachers. It is necessary to go beyond the limits of multiple-choice testing and foster the skills and thinking that lie at the core of college and career readiness, such as computational thinking. Computational thinking is a set of valuable skills that can be used to solve problems, design systems, and understand human behavior, and is thus essential to developing a more STEM-literate public. Computational thinking is increasingly seen as a fundamental analytical skill that everyone, not just computer scientists, can use. The goal of this project is to develop learning progressions and assessment items targeting computational thinking. The items will be used for a test of college-ready critical reasoning skills and will be integrated into an existing online assessment system, the Berkeley Assessment System Software.

The project will address a set of research questions focused on 1) clarifying computational thinking constructs, 2) the usability, reliability, and validity of assessment items and the information they provide, 3) teachers' use of assessments, and 4) relationships to student performance. The study sample of 2,700 used for the pilot and field tests will include all levels of students in 10th through 12th grade and first-year college students (both community college and university level). The target population is students in schools which are implementing the College Readiness Program (CRP) of the National Mathematics and Science Institute. In the 2020-21 academic year, 54 high schools across 11 states (CA, GA, FL, ID, LA, NC, NM, OH, TX, VA, and WA) will participate. This will include high school students in Advanced Placement classes as well as non-Advanced Placement classes. The team will use the BEAR Assessment System to develop and refine assessment materials. This system is an integrated approach to developing assessments that seeks to provide meaningful interpretations of student work relative to cognitive and developmental goals. The researchers will gather empirical evidence to develop and improve the assessment materials, and then gather reliability and validity evidence to support their use. In total, item response data will be collected from several thousand students. Student response data will be analyzed using multidimensional item response theory models.
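
The abstract states that response data will be analyzed with multidimensional item response theory models but gives no further detail. As a simplified, unidimensional stand-in, the sketch below fits a Rasch (one-parameter logistic) model to simulated dichotomous responses by joint maximum likelihood; the data, sample sizes, and estimation approach are illustrative assumptions, not the project's actual analysis.

```python
# A simplified, unidimensional stand-in for the multidimensional IRT analyses:
# Rasch (1PL) model fit to simulated item responses by joint maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n_students, n_items = 200, 12
true_theta = rng.normal(0.0, 1.0, n_students)   # simulated student abilities
true_b = np.linspace(-1.5, 1.5, n_items)        # simulated item difficulties
responses = (rng.random((n_students, n_items)) < expit(true_theta[:, None] - true_b[None, :])).astype(float)


def neg_log_likelihood(params):
    theta, b = params[:n_students], params[n_students:]
    p = np.clip(expit(theta[:, None] - b[None, :]), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1.0 - responses) * np.log(1.0 - p))


fit = minimize(neg_log_likelihood, np.zeros(n_students + n_items), method="L-BFGS-B")
b_hat = fit.x[n_students:]
b_hat -= b_hat.mean()  # Rasch difficulties are identified only up to a shift, so anchor the scale
print(np.round(b_hat, 2))
```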
