Myths About the Programme for International Student Assessment (PISA)
By Jonathan E. Martin, Principal, JonathanEMartin Ed. Services
From The Yield, Spring 2014
Myth #1: PISA is an international ranking competition, like the Olympics of high school academics, staged solely to establish the best and worst nations. Fact: It is true that when the results are released the year following administration (December 2013 saw the publication of the 2012 testing), headline writers render them as a horse race, reporting on which country is in front (Shanghai, Singapore, Korea, or Finland?), which is catching up (Poland!), and which is slipping behind.
However, the OECD would not have published dozens of monographs and reports over the past decade if it were simply a horse race. Instead, the Programme for International Student Assessment (PISA) provides the data by which policy makers can “learn from policies and practices applied elsewhere, and set policy targets against measurable goals achieved by other educational systems.”
Increasingly, the independent school community speaks of educating our students for global competency; PISA is the single best tool available for educators to become informed and sophisticated about global best pedagogical practices.
Myth #2: PISA is just another standardized test. Fact: The team at the OECD knew a comparison of national performance could not be based solely on content knowledge because of the wide variations in curricula. To obtain a more valid comparison, the OECD assessed critical “literacies,” asking students to demonstrate their knowledge and thinking skills in novel situations.
As Amanda Ripley explains in The Smartest Kids in the World, “Tests usually quantified students’ preparedness for more schooling, not their preparedness for life. The promise of PISA [is] that it would reveal which countries were teaching kids to think for themselves.”
Myth #3: U.S. students do just fine after taking socioeconomics into account. Fact: U.S. students significantly underperform students in many other nations. The U.S. mean score in math is 481, below the OECD mean of 494 and below the means of far poorer nations such as Russia, Vietnam, Italy, and Spain. U.S. students’ scores were closer to the OECD mean in reading (one point over) and in science (four points under).
What is most important for independent school admission directors to understand is that this underperformance is also deeply problematic at the high end of student proficiency: the U.S. is not doing enough to prepare its best students. Only 8% of U.S. students score in the top two levels of mathematical performance, compared to 55% in Shanghai, 24% in Japan, and 16% in Canada and Poland. The U.S. must do better at educating its top two quintiles, and this is where independent schools can make a great impact.
Myth #4: PISA solely measures students’ cognitive abilities. Fact: PISA assesses the significance of a wide array of non-cognitive attributes and correlates them with success on the cognitive sections. Attributes including perseverance, openness to problem-solving, locus of control, and motivation — both intrinsic (they enjoy the subject) and instrumental (they perceive the learning as valuable to their future career) — are studied in order to assess student beliefs about themselves and their learning environments.
For example, for locus of control, which corresponds closely to what Stanford’s Carol Dweck calls the growth mindset, students rate their agreement with statements such as, “If I put in enough effort, I can succeed,” and, “Whether or not I do well is completely up to me.” Students who express stronger agreement do considerably better on the PISA test.
It can be striking to learn that the impact of locus of control is greatest among the best-performing students. One might expect this mindset to move the needle most among students struggling or stuck in the middle of the pack, but the data show it makes the biggest difference (a gain of 36 score points, compared with 32) in the top bracket (90th percentile) of students. This surprising but highly significant pattern is consistent across all of the student beliefs measured, indicating that better attitudes make an even greater positive impact at the higher ends of performance.
Selecting students who self-report holding these beliefs, or who demonstrate a capacity and willingness to develop them, is one way to strengthen a student body’s academic future, especially when done in conjunction with educational program initiatives that develop and advance these beliefs.