Over the past four years, we have scored more than 1.8 million student responses using a patented methodology for analyzing composite items holistically with open-source software.

We are working on a proof of concept that the CRASE+ scoring engine can replicate that work, perhaps with higher quadratic weighted kappa agreement with human coders than the open-source method achieves. We ran a test case on five items and will extend the analysis to a wider variety of the 53 items in the existing pool.
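Assuming the kappa statistic referenced here is quadratic weighted kappa, the agreement measure commonly reported when comparing an automated scoring engine to human coders, it can be sketched in plain Python. The function name and the 0-to-(n-1) score encoding below are illustrative choices, not part of the methodology described above:

```python
def quadratic_weighted_kappa(human, machine, n_categories):
    """Quadratic weighted kappa between two raters' integer scores (0..n-1)."""
    n = n_categories
    # Observed score matrix: O[h][m] counts responses a human scored h
    # and the engine scored m.
    O = [[0] * n for _ in range(n)]
    for h, m in zip(human, machine):
        O[h][m] += 1
    total = len(human)
    # Expected matrix under rater independence (outer product of marginals).
    hist_h = [sum(row) for row in O]
    hist_m = [sum(O[i][j] for i in range(n)) for j in range(n)]
    E = [[hist_h[i] * hist_m[j] / total for j in range(n)] for i in range(n)]
    # Quadratic disagreement weights: penalty grows with squared distance.
    W = [[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)] for i in range(n)]
    num = sum(W[i][j] * O[i][j] for i in range(n) for j in range(n))
    den = sum(W[i][j] * E[i][j] for i in range(n) for j in range(n))
    return 1.0 - num / den
```

A kappa of 1.0 indicates perfect agreement, 0.0 indicates chance-level agreement, and the quadratic weights penalize a two-point disagreement more heavily than a one-point one, which suits ordered rubric scores.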

The ultimate goal is to use this methodology for immediate formative feedback: helping teachers locate students along a learning progression with specific indicator codes that correspond to particular errors or misconceptions, which can then be targeted with instructional activities addressing those patterns of student understanding.