Toward the end of WWII, one of the country’s major testing organizations was asked to design a tool for measuring and assessing the non-cognitive attributes most important for success, and it promised to deliver a reliable and valid measurement within six months. As it turns out, the work took neither six months nor six years, but six decades. The tool has arrived now, however, and one of its formats is the Mission Skills Assessment (MSA) from the Index Group.
This was reported at the National Association of Independent Schools (NAIS) annual conference by Rich Roberts, Managing Principal Research Scientist for Educational Testing Service (ETS). As he explained, ETS has worked diligently at this for ten years, and is arriving at a position of strong confidence in its newly developed tools.
The Mission Skills Assessment is being developed by the Index Group, which is composed of 28 NAIS member K-8/9 schools of 400+ students. After a thorough review of what research tells us is most important out in the world, as well as what their own missions state as being most important, they arrived at six essential “Mission Skills” in addition to the cognitive/academic/intellectual skills which our schools already assess:
Teamwork, Creativity, Ethics, Resilience, Curiosity, Time Management.
ETS researcher Roberts emphasized that we shouldn’t make the mistake of overemphasizing the value of non-cognitive skills. No, they are not two times as important as cognitive skills, a claim he hears bandied about in some places—but they are of equal importance.
The Index Group explains the purposes for the MSA on its website, and it is important to note that individual student evaluation or reporting is not the focal point here:
Participating schools will use the assessment to benchmark themselves against each other and to measure student improvement over the course of a middle school education. Other goals include:
* Assess mission-related skills and performance character traits with reliability and validity.
* Correlate to other valued educational outcomes.
* Improve a school’s ability to teach these skills and traits; share ideas and resources.
* Be a leader in education reform and demonstrate the value our schools provide.
This is an exciting initiative, and all of us who want to see our schools do an even better job than they already do in cultivating and advancing these skills should applaud. Now schools using this tool can compare their results to those of similar schools, and upon identifying gaps in performance, can identify practices in high-performing schools for their own program development. But as much as this tool is primarily aimed at institutional program improvement, it also provides valuable food for thought for admission professionals.
Admission officers seeking to round out their evaluations of applicants to include non-cognitive character assessments in addition to GPA and test scores are all too familiar with the skepticism they encounter from some constituents. This skepticism is less likely to focus on the value of these attributes and more likely to cast doubt on whether they can be effectively assessed.
The Index Group’s MSA study provides a significant step forward in answering this skepticism with research-based evidence, and suggests a practical, evidence-based strategy for conducting similar or parallel assessments. According to Roberts, who recently edited a book published by Oxford University Press entitled New Perspectives on Faking in Personality Assessment, the strategy which is demonstrably effective is Triangulation: the use of three separate measurements combined to offset the flaws inherent in each.
Student self-evaluation and reporting is a fine tool, up to a point, and is used in the Mission Skills Assessment, but only for a third of the total quantification. As an example, students complete a survey of their teamwork skills by rating how frequently statements such as “I am a good team member” and “I work effectively with others” describe their behavior.
The second leg of this three-legged stool is teacher evaluation of student skills. The MSA’s online teacher evaluations are carefully designed to parallel the student self-evaluations, using closely matching statements.
The third leg is less tightly defined, in that there are various options for the different core attributes. It is not so important exactly what the third tool is as that there is a carefully designed third tool. Ethics, resilience, and teamwork might be measured by Situational Judgment Tests (SJTs), which place students in a scenario and ask them to select the best course of action; creativity could be measured through one of the many common performance tests that ask students to generate ideas or examples; and time management could be assessed using so-called biographical data surveys, such as reporting how many times a student was late in the last month. (Note that this is a middle school tool, and Roberts reports research showing that middle school students have a high rate of honesty on such items, in contrast to high school students, among whom honesty rates plummet.)
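As a rough sketch of what triangulation amounts to computationally, here is one way three imperfect measures of the same construct could be combined into a single composite. The equal weighting and the 0-to-1 score scale are illustrative assumptions on my part, not the MSA’s actual scoring model:

```python
def triangulate(self_report, teacher_rating, third_measure,
                weights=(1 / 3, 1 / 3, 1 / 3)):
    """Combine three normalized scores (each on a 0.0-1.0 scale) into one
    composite; the equal weights are an illustrative assumption."""
    scores = (self_report, teacher_rating, third_measure)
    if any(not 0.0 <= s <= 1.0 for s in scores):
        raise ValueError("each instrument score must fall between 0.0 and 1.0")
    return sum(w * s for w, s in zip(weights, scores))

# A hypothetical student's teamwork construct from the three instruments:
teamwork = triangulate(0.80, 0.65, 0.70)  # self-report, teacher rating, SJT
```

The point of the design is that no single leg contributes more than a third of the total, so no single instrument’s weaknesses, such as self-report inflation, can dominate the composite.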
Triangulation works, Roberts reports, in both reliability and validity. Reliability, or whether results are consistent across multiple administrations of the tool, was found to run in a range of 0.85 to 0.91 across the six constructs, as compared to 0.91 for the SAT Math section. Validity, or whether the tool indeed assesses the precise traits it claims to, is evaluated by examining how well the tool predicts outcomes it should predict. In this case, validity was demonstrated by how well the MSA predicted teacher-rated student quality and student-reported well-being: it predicted these better than standardized test scores and GPA did, and it predicted GPA just a little less well than standardized tests did. Creativity correlated with student quality at .56, and time management with 7th-grade GPA at .59. Much more detail is available in the Index Group’s 72 slides, which are available here.
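For readers who want to unpack those validity figures: the .56 and .59 values are correlation coefficients (Pearson’s r), which can be computed from paired scores as below. The sample data here is invented purely for illustration, not drawn from the study:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented time-management scores paired with invented GPAs:
r = pearson_r([3.1, 2.4, 3.8, 2.9], [3.4, 2.8, 3.9, 3.1])
```

An r of 1.0 would mean a perfect positive relationship and 0 would mean none at all, so correlations in the .5 to .6 range, like those reported for the MSA, represent a substantial relationship for educational measurement.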
The Index Group MSA is not currently widely available, and has not been designed with the intention of admission use. But in the next few years, the tool itself, and parallel tools, are likely (I might speculate certainly) to become a much more significant part of the assessment landscape, and inevitably strategies will emerge to link these tools and techniques to the work of admission assessment.
Reports from the Field
Also at NAIS, the admission team at Connecticut’s Salisbury School presented the research they are doing in mapping student success at their school to admission criteria, in a session entitled “The Science Behind the Art of Admissions.” As they explained, this is an effort to bring the analytical approach of Moneyball’s sabermetrics to student selection, and great use was made of clips from that terrific film (and I’d add that Michael Lewis’ book Moneyball is a must-read).
Working in part with SSATB and its “effective enrollment management” services, including the Optimal Use Study, they considered a wide array of critical questions for admission effectiveness, including:
- Whom does your school serve well?
- What are the characteristics of successful students?
- How do you quantify the potential success of an applicant?
- What are the correlations & expectancies for student performance at the school?
Much emerged from their analysis, including a valuable deeper understanding of which SSAT test areas have the strongest relationship to GPA in both the first and second years at their school. But in addition to the importance of standardized testing for such predictions, they also determined certain non-cognitive qualities they particularly valued in their students, such as grit, optimism, and benevolence.
In the past, these types of qualities might have occasionally been brought up when considering applicants, but with Moneyball in mind, the Salisbury team sought to strengthen and make more consistent the role of these criteria in their process by quantifying them. Accordingly, they are now carefully evaluating each of these qualities in every applicant during their admission interviews, and a point system has been established to ensure these “softer” attributes are factored right alongside test scores and GPA. Each candidate can now earn between one and three points for each of three categories: grit/optimism, benevolence, and reasons for choosing Salisbury.
Salisbury Admission Director Peter Gilbert told me of the careful research undertaken to build out these interviews by the team there, which also includes Tim Randall and Brian Phinney. For evaluating grit, they drew upon the research of Angela Duckworth and the grit scale she developed (which I shared in a previous blog post), along with some of the interview questions taken straight from that scale. Optimism scales, and a larger argument for the essential importance of optimism to perseverance and accomplishment in a wide array of domains, can be found in the writings of Duckworth’s mentor at the University of Pennsylvania, Marty Seligman, such as his excellent book The Optimistic Child.