Q&A with Jack Buckley on June SAT Printing Error
College Board Communications
The College Board and Educational Testing Service (ETS) announced earlier this month that the sections of the SAT® administered on June 6 that were affected by a printing error will not be scored, and that we are still able to report valid and reliable scores. Today, students are receiving those scores.
In this interview with Jack Buckley, College Board's senior vice president for research, All Access delves deeper into the research that informed this decision and explores the “how and why” behind our ability to provide reliable scores for students.
What exactly happened with the June 6 SAT forms?
A printing error in students’ SAT test books incorrectly indicated that they had 25 minutes to complete the last reading section, while the test center supervisors’ manual indicated the correct time of 20 minutes. As a result, some students thought they had five more minutes to complete the section than they actually did.
Because of the way the SAT is administered, students taking the last math section in the same room at the same time would also be affected, even though the misprint only appeared in the last reading section.
Together with ETS, we conducted an extensive analysis to decide how to resolve the issue in a way that was respectful to students and families and psychometrically sound. Our analysis found that not scoring the two impacted sections would still allow us to produce scores that are sufficiently reliable for use by colleges and other organizations.
Can you describe how you conducted the analysis?
Our analysis focused on three key considerations:
- Whether the scores would be sufficiently reliable when not scoring two sections.
- Whether we could equate the test forms if we did not score two sections.
- Whether the remaining sections include the same distribution of content and skills that would allow students to demonstrate their ability.
What does “reliable scores” really mean?
“Reliability” is the tendency of scores to be consistent on different testing occasions when the student’s ability is unchanged. Since tests like the SAT are well-understood statistically, we were able to model the effect of a shorter test (i.e., a test missing a section) on reliability.
We found that the reliability of the shortened June 6 SAT was comparable to that of the full-length test.
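The effect of shortening a test on its reliability can be illustrated with the Spearman-Brown prophecy formula, a standard psychometric result. The sketch below is purely illustrative — the reliability value and section counts are hypothetical, not actual SAT statistics, and this is not necessarily the exact model ETS used.

```python
# Illustrative only: the Spearman-Brown prophecy formula predicts how
# reliability changes when a test is lengthened or shortened.
# All numbers below are hypothetical, not actual SAT statistics.

def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when test length is multiplied by length_factor."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Suppose a full-length test has reliability 0.93 and two of ten
# equally weighted sections are dropped (length factor 8/10 = 0.8):
shortened = spearman_brown(0.93, 0.8)
print(round(shortened, 3))  # 0.914
```

In this toy example, dropping a fifth of the test lowers the predicted reliability only slightly (from 0.93 to about 0.91), which is the general pattern behind the finding that the shortened form remained comparable to the full-length test.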
So, why not just shorten the exam for good?
Good test design requires building in a certain level of redundancy so that the test can still provide reliable results even if things don’t go as planned. For example, if a test item or set of items doesn’t perform as expected, it can be omitted and the test can still produce reliable results. This doesn’t occur often, but the SAT is built to withstand it nonetheless.
What about the other two considerations? How did you determine whether the shortened test could be equated, and whether it included the right distribution of content and skills?
“Equating” is a statistical procedure that is conducted to ensure that different versions of a test are of comparable difficulty. The SAT is equated using anchor sections — sections that have appeared on previous forms and are used to put the new form “on scale.” During the June 2015 administration, the anchor sections were not affected by the printing error, and therefore neither was the equating process.
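The general idea of anchor-based linear equating can be sketched in a few lines. This is a toy mean-sigma illustration with made-up anchor scores, not ETS's actual operational procedure: scores on a common anchor section, observed under both forms, are used to map the new form's scores onto the old form's scale.

```python
import statistics

# Toy illustration of anchor-based linear (mean-sigma) equating.
# The anchor scores below are hypothetical, not real SAT data.
old_anchor = [12, 15, 14, 18, 10, 16, 13, 17]  # anchor scores seen with the old form
new_anchor = [11, 14, 13, 17, 9, 15, 12, 16]   # anchor scores seen with the new form

def linear_equate(x: float) -> float:
    """Map a new-form score x onto the old form's scale."""
    mu_old, sd_old = statistics.mean(old_anchor), statistics.stdev(old_anchor)
    mu_new, sd_new = statistics.mean(new_anchor), statistics.stdev(new_anchor)
    return mu_old + (sd_old / sd_new) * (x - mu_new)

# Here the new group scored one point lower on the anchor on average,
# so new-form scores are adjusted upward by one point:
print(linear_equate(13.0))  # 14.0
```

Because the anchor sections on June 6 were among the correctly timed sections, this kind of linking between forms could proceed unaffected by the printing error.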
While reliability and equating are essential, it is equally important to ensure that the content of the shortened test adequately represents the intended domain of knowledge and skills. We compared the content and item parameters (item difficulty and discrimination — an item’s ability to distinguish between test-takers at different levels of knowledge) in the correctly timed sections to the overall test specifications.
Since each SAT section is designed to reflect the overall distribution, we found that this shortened test is highly aligned to the overall SAT content specifications.
How can students know that their scores reflect those they would have received if all sections had been scored?
By examining reliability, fairness, and the coverage of content and skills, we determined that the June 6 SAT had very similar properties to the typical SAT. As such, students’ scores are a valid and reliable representation of their knowledge, abilities, and skills across the content domains assessed by the SAT.
Will colleges still accept the scores from June 6?
We have talked with college admission directors from across the country, and they have expressed full confidence in the scores and will use them as they would any other SAT score in their admission decisions.
If you have any questions, please contact College Board customer service at email@example.com.