Standardized tests measure many things—math, science, reading—and, crucially, how hard students actually try.
In 2009, the US failed to rank in the top 20 for most subjects tested in the OECD’s Programme for International Student Assessment (PISA), a standardized test taken by 15-year-olds in over 70 countries that is widely used to steer education policies. “I know skeptics will want to argue with the results, but we consider them to be accurate and reliable,” said Arne Duncan, then US Secretary of Education under Obama. “We can quibble, or we can face the brutal truth that we’re being out-educated.”
That same year, students in Shanghai, China, posted the world’s best scores in math, reading, and science. (In fairness, the 5,100 Shanghai teens selected to represent the city of 24 million are probably not representative of China as a whole.)
For policymakers in countries with poor PISA scores, new research suggests that the problem may be less about students’ abilities and more about their motivation. That is the tentative conclusion of a study by economists from the OECD and several US universities, who ran a set of experiments to see how motivation, fatigue, and attitudes toward hard work influence standardized test scores.
After randomizing the order of questions given to students in 12 countries, including the US, the researchers found that a student, on average, was 13.7 percentage points less likely to answer a question correctly if it appeared near the end of a test instead of at the beginning. The gap was smaller for students from strong PISA performers, like Singapore and Hong Kong, and larger for lower-ranked countries, like the US and Spain.
How students feel about hard work might be a factor. Students from countries like Finland, where respondents were more likely to believe that hard work leads to a better life, had an easier time maintaining focus at the end of a test. In places like Russia, where students said that luck and connections are bigger factors in success, students showed less endurance during testing.
Could this be fixed? The researchers administered a 25-question math test, made up of previously used PISA questions, at schools in Shanghai and the US. Students were randomly divided into two groups: in the first, students received $25 at the start of the test and lost a dollar for every question they got wrong; the other group was not paid.
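The arithmetic of this incentive scheme is worth spelling out: with a $25 stake and a $1 penalty per wrong answer on a 25-question test, a student effectively earns a dollar for every correct answer. A minimal sketch (the dollar amounts and question count come from the study; the function itself is only illustrative):

```python
def payout(correct: int, questions: int = 25,
           stake: float = 25.0, penalty: float = 1.0) -> float:
    """Dollars a student keeps: the initial stake minus $1 per wrong answer."""
    wrong = questions - correct
    return max(stake - penalty * wrong, 0.0)

# With a $25 stake on 25 questions, the payout equals $1 per correct answer:
assert payout(25) == 25.0  # a perfect score keeps the full stake
assert payout(20) == 20.0  # five wrong answers cost $5
assert payout(0) == 0.0    # all wrong forfeits everything
```

Because the stake exactly matches the number of questions, losing money on each mistake is mathematically identical to being paid per correct answer, but framing it as a loss is meant to tap loss aversion.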
The scores of Shanghai students didn’t shift much in response to financial incentives. But the American students who were paid answered at least one more question correctly, on average. The average honor student did even better, boosting her score by two points.
As a result, the researchers think a $25 incentive scheme could meaningfully change the US’s PISA rankings. They predict that if the US had used financial incentives during the 2012 PISA test, the country’s math ranking would have risen to 19th, from 36th. (The effects, though, would be smaller if all countries paid their students.)
Does this mean that the high-profile PISA test is flawed? Researchers don’t think so. A team led by Gema Zamarro, an education professor at the University of Arkansas, posits that declining effort over the course of a test might reflect skills as important as math and reading, like conscientiousness and self-control. These, as much as measures of academic performance, could play an important role in whether students complete high school, go to college, and achieve professional success.