While character education is a hallmark of an independent school education and a salient piece of most schools’ missions, gaining insight into an applicant’s current level of character skill development has largely been a matter of intuition and careful reading of the application. Although many schools assess character in some way (e.g., through student interviews or teacher recommendations), the industry’s reliance on unstandardized and inconsistent measures highlights the need for a standardized, empirically supported approach.
The Enrollment Management Association’s (EMA) newest member offering, The Character Skills Snapshot, is revolutionary in its ability to provide a picture of a child’s character skill development at a single point in time. It gives admission offices a clear view of how a child sees him- or herself, as well as the areas in which a school’s pedagogy can both build on current strengths and assist in the evolution of emerging skills.
Based on recommendations from the Think Tank on the Future of Assessment over the last several years, EMA felt strongly that member schools should be closely involved in the development of a new character assessment for admission. A character assessment working group (“G32Plus”), composed of member schools from a variety of school types and locations, was formed to advise and develop the new assessment. The group’s first task was to help EMA and the Educational Testing Service (ETS) settle on which character skills to measure.
Pretesting and Skill Identification
A first group summit provided participating members the opportunity to identify, discuss, and ultimately select the initial set of 12 character skills to be assessed in the item (question) pretest. The pretest was then conducted with over 1,400 students, collecting responses to Likert-type items written to measure the 12 skills. A Likert-type item presents a statement (e.g., “I work hard”) and allows respondents to select an option on a scale (e.g., strongly disagree to strongly agree). The results of this pretest allowed the group to home in on the seven skills the items were actually measuring. A forced-choice assessment was then developed to measure the new set of seven character skills: initiative, intellectual curiosity, open-mindedness, resilience, responsibility, self-control, and teamwork.
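To make the Likert-type format concrete, here is a minimal sketch of how such responses could be converted into per-skill scores. The item labels, five-point scale, and simple averaging rule are illustrative assumptions, not EMA’s actual scoring method.

```python
# Hypothetical Likert-type scoring sketch; the scale and skills shown
# are illustrative, not The Snapshot's actual instrument or rubric.
LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score_likert_responses(responses):
    """Average one respondent's item scores into a per-skill score.

    `responses` maps a skill name to the list of scale labels the
    respondent chose for that skill's items.
    """
    return {
        skill: sum(LIKERT_SCALE[label] for label in labels) / len(labels)
        for skill, labels in responses.items()
    }

scores = score_likert_responses({
    "initiative": ["agree", "strongly agree"],
    "teamwork": ["neutral", "agree"],
})
print(scores)  # {'initiative': 4.5, 'teamwork': 3.5}
```

A forced-choice format, by contrast, would ask respondents to choose between statements rather than rate each one, which reduces the tendency to agree with everything.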
The committee deemed one final skill important for inclusion in The Snapshot: social awareness. With guidance from EMA and ETS, 35 teachers from member schools constructed the situational judgment questions now included in The Snapshot. Situational judgment items present a scenario that usually describes a potential point of conflict between two or more people (e.g., a group of students, a student and a teacher, or a student and a parent). Each scenario is associated with four possible ways of reacting to the situation, and the respondent is asked to evaluate the appropriateness of each reaction. These items were designed specifically to measure social awareness.
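As an illustration of the situational judgment format, the sketch below models a scenario with four candidate reactions and a simple agreement-based scoring rule. The scenario text, rating values, and scoring key are entirely hypothetical and are not drawn from The Snapshot.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SituationalJudgmentItem:
    """One scenario with four candidate reactions, each rated separately."""
    scenario: str
    reactions: List[str]

# Invented example scenario (not an actual Snapshot item).
item = SituationalJudgmentItem(
    scenario=("A classmate is visibly upset after being left out of a "
              "group project."),
    reactions=[
        "Ignore it; it is not your problem.",
        "Tell the teacher immediately without talking to the classmate.",
        "Invite the classmate to join your group.",
        "Ask the classmate privately how they are feeling.",
    ],
)

def score_item(item, ratings, key):
    """Score one respondent against a hypothetical scoring key.

    `ratings` holds the respondent's appropriateness rating (e.g., 1-4)
    for each reaction; here the score is simply the count of reactions
    rated the same way as the key.
    """
    assert len(ratings) == len(item.reactions) == len(key)
    return sum(1 for r, k in zip(ratings, key) if r == k)

print(score_item(item, [1, 2, 4, 4], [1, 2, 3, 4]))  # 3
```

Actual situational judgment scoring typically weights partial agreement with expert ratings; exact matching is used here only to keep the sketch short.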
Pilot Testing and Validity
Two large pilot tests of the forced-choice and situational judgment items were conducted. The first, called the Beta Test, was administered to over 6,000 students. It provided an opportunity to evaluate the construct validity of all potential forced-choice and situational judgment items. Results from this pilot allowed EMA to identify sets of situational judgment items for use in assessment forms and provided important information about the validity of the forced-choice assessment. The preliminary validity evidence from this pilot confirmed that the forced-choice assessment measures the intended constructs, and that this measurement is consistent across subgroups defined by gender, ethnicity, English language status, and grade level.
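One simple way to illustrate the idea of consistent measurement across subgroups is to compute an internal-consistency estimate (Cronbach’s alpha) separately for each group and compare the values. This is only a conceptual sketch with invented numbers; formal subgroup comparisons of this kind rely on confirmatory factor analysis and measurement invariance testing, not the calculation below.

```python
# Conceptual sketch: compare internal consistency (Cronbach's alpha)
# across subgroups. All data structures and values are invented.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.

    `item_scores` is a list of items, each a list of scores for the
    same respondents in the same order.
    """
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

def alpha_by_group(data_by_group):
    """Compute alpha separately for each subgroup's response matrix."""
    return {group: cronbach_alpha(items) for group, items in data_by_group.items()}
```

Two items that track each other perfectly yield an alpha of 1.0; similar alpha values across groups would be one (weak) indication that the scale behaves comparably for each subgroup.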
The second pilot was the Field Trial, administered to more than 4,000 students. This was the formal test of the assessment forms to be included in the first operational year of The Snapshot. Based on evidence from this pilot, the situational judgment forms created using data from the previous pilot demonstrated stable measurement as intended: although the forms contain different items, they measure the same constructs and are functionally interchangeable. Results also revealed strong consistency in scores (i.e., score reliability) on the forced-choice assessment over one year (from Beta Test to Field Trial) for students who took the assessment both times.
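Score consistency between two administrations (test-retest reliability) is conventionally estimated as the correlation between the same students’ scores at the two time points. The sketch below computes a Pearson correlation on invented scores; the data shown are not from the Beta Test or Field Trial.

```python
# Test-retest reliability sketch: Pearson correlation between the same
# students' scores on two administrations. Scores are invented.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-student scale scores at time 1 and time 2.
beta_scores = [3.1, 2.4, 4.0, 3.6, 2.9]
field_scores = [3.0, 2.6, 3.9, 3.5, 3.1]

print(round(pearson_r(beta_scores, field_scores), 2))  # 0.98
```

A correlation near 1.0 indicates that students who scored high the first time also tended to score high the second time, which is the sense in which scores were consistent across the two pilots.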