Recently, several colleges and universities in the US have announced that applicants no longer need to submit SAT or ACT scores to be considered for admission. Numerous schools have gone test optional; Bowdoin College, in fact, has been test optional since 1969. When available, however, standardized test scores have been used almost uniformly in admission decisions at most schools, with the SAT in use since 1926 and the ACT since 1959. Testing has come under intense scrutiny, and the debate over its usefulness in college admissions continues to this day. That debate is really a debate over what criteria should be used for admissions: ultimately, the objective and subjective criteria used to select students determine the kinds of people who end up winning coveted spots in incoming classes.
Should standardized tests be used in admissions? And how should we determine whether to use them?
A recent study (pdf) by William Hiss and Valerie Franks examined the success of students at 33 public and private universities. The authors noted, “Few significant differences between submitters and non-submitters of testing were observed in Cumulative GPAs and graduation rates, despite significant differences in SAT/ACT scores.” They concluded that these findings support test-optional policies.
In contrast, a 2011 study by Howard Wainer examined 23 schools, including a detailed analysis of data from Bowdoin, and showed that first-year college grades for students who did not submit SAT scores were actually predicted more accurately from their non-submitted scores (retrieved through a special data-gathering effort at the Educational Testing Service) than first-year grades were for students who did submit. He noted that because non-submitters tend to have lower scores, a finding corroborated by Hiss and Franks, test-optional policies may allow universities to report higher score averages and thereby improve their standing in college rankings such as US News & World Report. Wainer concluded: “If the goal of admissions policy is to admit students who are likely to do better in their college courses, students with higher SAT scores should be chosen over students with lower scores. Making the SAT optional seems to guarantee that it will be the lower-scoring students who withhold scores.”
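The ranking incentive Wainer describes follows mechanically from self-selection: if lower-scoring applicants disproportionately withhold, the average of submitted scores must exceed the average of the full pool. A minimal sketch, using made-up numbers (not Wainer's or Bowdoin's data) and assuming, for illustration, that applicants below a single threshold withhold:

```python
import random
import statistics

random.seed(42)

# Hypothetical applicant pool: SAT scores roughly normal around a
# notional mean of 1050 (illustrative numbers, not real admissions data).
scores = [random.gauss(1050, 200) for _ in range(10_000)]

# Assumption: under a test-optional policy, applicants scoring below some
# threshold tend to withhold their scores.
WITHHOLD_BELOW = 1050
submitted = [s for s in scores if s >= WITHHOLD_BELOW]

mean_all = statistics.mean(scores)
mean_submitted = statistics.mean(submitted)

print(f"Mean score, full applicant pool: {mean_all:.0f}")
print(f"Mean score, submitters only:     {mean_submitted:.0f}")
# The submitted-only average exceeds the true pool average, so a school
# reporting only submitted scores looks stronger in test-based rankings.
```

The exact gap depends on the threshold and the score distribution, but the direction of the effect does not: any withholding concentrated among lower scorers inflates the reported average.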
The findings of these two studies, or really the policy conclusions their authors draw from them, appear to contradict one another. On the one hand, Hiss and Franks argue that the negligible differences in college graduation rates and GPAs between submitters and non-submitters support test-optional policies. On the other, Wainer argues that the SAT usefully predicts first-year grades and is therefore an important admissions tool.
So which argument is correct? Ultimately, it depends on the goal of the admissions policy.
It is reasonable to argue that institutions are free to decide what admissions policies to implement, since their goals for constructing an incoming class may vary. How heavily tests, high school grades, community service, leadership, sports participation, essays, and other personal qualities are weighted in a school's selection formula speaks directly to the qualities that school wants in its student body. But given decades of evidence supporting the predictive validity of standardized tests such as the SAT and ACT, declining to make these tests a uniform standard across all applicants allows a wider range of subjective and personal biases to come into play. Making something optional, after all, means that not every candidate can be evaluated by the same criteria.
For example, Maria Laskaris, dean of admissions at Dartmouth, told the New York Times: “With grade inflation, enormous variation in high school rigor and a surplus of excellent applicants, a test like the ACT or SAT is, for the moment, the only thing that is standard across all our applicants.” In fact, The Economist recently showed how widespread grade inflation has become in higher education in recent decades. One limitation of the research to date, then, may be that its outcome measures lack headroom: grade inflation, likely accompanied by inflated graduation rates, means these outcomes are no longer rare enough to discriminate among students. If most students earn high grades and graduate, how can those outcomes be used to evaluate whether the SAT is a useful predictor? Given these limitations, I would propose a study design that could determine with more confidence whether test-optional policies are effective: instead of limiting outcomes to college performance, conduct a longitudinal study tracking what happens to graduates in terms of income, work performance, and other key educational and occupational benchmarks. If test score submitters and non-submitters turn out to have equivalent long-term outcomes, that would support making tests such as the SAT and ACT optional.
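The headroom argument is a standard statistical phenomenon: censoring an outcome at a ceiling attenuates its observed correlation with any predictor. A minimal simulation of this, with invented parameters (a true correlation of 0.5 and an arbitrary cap standing in for grade inflation; none of these numbers come from the studies discussed):

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

N = 20_000
# Hypothetical setup: a test score that predicts a latent performance
# outcome with a true correlation of 0.5 (both on standardized scales).
scores = [random.gauss(0, 1) for _ in range(N)]
performance = [0.5 * s + (0.75 ** 0.5) * random.gauss(0, 1) for s in scores]

# Grade inflation modeled as a ceiling: everything above the cap is
# recorded at the cap, the way clustered A grades compress GPA near 4.0.
CEILING = 0.5  # arbitrary illustrative cutoff
inflated = [min(p, CEILING) for p in performance]

r_uncapped = pearson(scores, performance)
r_capped = pearson(scores, inflated)
print(f"correlation, outcome uncapped: {r_uncapped:.2f}")
print(f"correlation, outcome capped:   {r_capped:.2f}")
# The capped outcome shows a weaker correlation with the test score,
# even though the underlying relationship is unchanged.
```

The point is not the specific magnitudes but the direction: a compressed outcome can understate a predictor's validity, which is why moving to rarer, longer-horizon outcomes would give a cleaner test of test-optional policies.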
In some of my research (pdf), my colleagues Gregory Park (pdf), David Lubinski, and Camilla Benbow and I have shown that SAT scores from talented 13-year-olds predict performance on multiple educational and occupational outcomes decades later, including earning doctorates, publications, university tenure, patents, and even income. This shows that test scores can be a useful tool for predicting performance not just in college but well beyond it. There is also a large body of evidence supporting standardized tests, as documented by David Z. Hambrick and Christopher Chabris in Slate. Finally, it is worth considering why many businesses are asking candidates for their SAT scores, as Shaila Dewan discussed in the New York Times. If the SAT is being used as a filtering mechanism for hiring after college, why is it being removed as a filtering mechanism before college?