Decision Models for Placement of Students

Based on State Test Scores in Grades 4 and 8

Gerald E. DeMauro
Coordinator
Office of State Assessment

December, 1999

Abstract

This paper examines considerations for interpreting statewide assessment results. It is intended to balance the expressed desire of local administrators for increasingly finer divisions of populations and item groupings against the requirements for reliability in a standards-based assessment system. Considerations are presented for decision models that proceed systematically from the highest levels of reliability to the lowest. In this way, the most reliable data form the foundation for decisions, while the least reliable data are examined in view of other sources of information.

Decision Models for Placement of Students

Based on State Test Scores in Grades 4 and 8

Overview

The standard setting studies to determine proficiency levels in grades 4 and 8 in English Language Arts and in Mathematics, and on New York State Regents Examinations in Comprehensive English and Mathematics A, used a procedure called item mapping. This procedure requires student scores and test item difficulties to be scaled together. On that common scale, a student whose score has a higher scale value than an item's difficulty value has a greater than 50 percent chance of answering that item correctly, while a student whose score has a lower scale value than the item's difficulty value has a lower than 50 percent chance of answering it correctly.
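For illustration only, the following minimal sketch assumes a one-parameter logistic (Rasch) model, which is consistent with the 50 percent interpretation described above: a student whose scale location equals an item's difficulty has exactly a 50 percent chance of success, and the probability rises or falls as the two values separate. The function name and example values are hypothetical, not taken from the statewide scaling.

    import math

    def prob_correct(theta, b):
        # Probability that a student at scale location theta answers
        # an item of difficulty b correctly, under a Rasch model.
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    print(prob_correct(1.5, 1.0))   # student above item difficulty: ~0.62
    print(prob_correct(0.5, 1.0))   # student below item difficulty: ~0.38
    print(prob_correct(1.0, 1.0))   # student exactly at item difficulty: 0.50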

Through a deliberative process of expert judgment, items are classified as representing no achievement, partial, or full achievement of the standards. Because the items are on the same scale as student scores, this deliberative classification of test items results in the scale scores that demarcate the proficiency levels of the examinations.

This process rests heavily on the definitions used to describe achievement of the Learning Standards. The experts who participated in standard setting for the fourth and eighth grade tests were advised that Level 1 students demonstrated no achievement of the Learning Standards, while Level 2 students demonstrated either some achievement of each Learning Standard or full achievement of some but not all of the Learning Standards.

Essentially, then, Level 1 students have not demonstrated success with respect to meeting the Learning Standards. This recommends a placement that is qualitatively different from the current instructional program. Level 2 students, on the other hand, have quantitative deficits that might be addressed by quantitatively different programs, e.g., those that differentially stress identified weaknesses.

Multiple Measures in a Hierarchical Model

The statewide tests can never reveal more about the skills and deficits of students than local teachers and administrators can observe directly. However, the statewide tests do provide a single uniform measure across all school districts and classrooms, and they provide broadly diagnostic information that is most reliable when it is aggregated over larger numbers of students and larger numbers of test questions.

Step 1: Self Study. Each school and district should carefully review the opportunity it provides for students to acquire the Learning Standards. In particular, attention should be given to:

    1. When each aspect of the standard or key idea, including each performance indicator, is taught during the child's academic career;
    2. How much instructional time is devoted to each standard or key idea;
    3. How the acquisition of the standard or key idea is evaluated;
    4. How feedback is provided to the child on the evaluation;
    5. What the consequences of different levels of performance on local assessments are for the child in terms of implementing the standards;
    6. How the instructional program varies from building to building and from classroom to classroom.

The self study is a necessary first step in interpreting the results of statewide assessment. Without critical attention to the instructional program before statewide assessment, there is insufficient local capacity to interpret or to respond to the results. Staffing and resource issues are a major component of any self reflection.

Step 2: Largest to Smallest Group Analysis. The analysis of results should then proceed from the largest aggregation to the smallest. Two dimensions must be considered.

    1. The numbers of students in a group;
    2. The numbers of test questions on which decisions are made.

Too often, with the best intentions of deriving all possible information from test results, local administrators make unreliable or wrong decisions about children because they are misled by a database that is too small in terms of numbers of students or items.

For example, one item may show that the students have not mastered a certain concept, e.g., main idea of a passage. In fact, that item might have a very high difficulty statewide and might be difficult for all children. Similarly, students who did poorly on a certain item might fare better on the test as a whole simply because that particular item was not as good a discriminator as other items tapping the same concept.
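These cautions can be made concrete with classical item statistics. The sketch below is illustrative only, using invented response data: it computes each item's difficulty as the proportion of students answering correctly, and its discrimination as the point-biserial correlation between the item score and the total test score. A genuinely hard item shows a low proportion correct statewide, not just locally, and a weak discriminator shows a low correlation with the total score.

    def item_statistics(responses):
        # responses: one list of 0/1 item scores per student
        n = len(responses)
        totals = [sum(r) for r in responses]
        mean_t = sum(totals) / n
        sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
        stats = []
        for j in range(len(responses[0])):
            item = [r[j] for r in responses]
            p = sum(item) / n            # difficulty: proportion correct
            cov = sum((item[i] - p) * (totals[i] - mean_t)
                      for i in range(n)) / n
            sd_i = (p * (1 - p)) ** 0.5
            r_pb = cov / (sd_i * sd_t) if sd_i > 0 and sd_t > 0 else 0.0
            stats.append((p, r_pb))      # discrimination: point-biserial
        return stats

    # Invented data: five students by three items
    data = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1]]
    for j, (p, r_pb) in enumerate(item_statistics(data), start=1):
        print("Item %d: difficulty=%.2f, discrimination=%.2f" % (j, p, r_pb))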

Programs might also be redesigned in error because of the performance of a few students. Even though the test is designed to minimize extraneous factors, when low numbers of students are involved, the influence of such factors is much greater, and programs should be cautious about making large-scale changes on the basis of little evidence.

Student performance analyses should proceed from the largest aggregation of students to the smallest:

    1. Statewide descriptive statistics such as means, frequencies, and standard deviations;
    2. Public or nonpublic descriptive statistics;
    3. Program-wide descriptive statistics, e.g., statewide special or general education results;
    4. Regional or resource-need-category descriptive statistics for the population of interest;
    5. District-level descriptive statistics for the population of interest;
    6. Building-level descriptive statistics for the population of interest;
    7. Classroom-level descriptive statistics for the population of interest;
    8. Student-level performance for the population of interest.

Each step should be evaluated in terms of observed discrepancies with the higher steps. Program-, district-, building-, and classroom-level information must be interpreted in light of the self study results.

The second dimension to be considered is the level of test aggregation. This progresses differently for different subject areas, but in general should be considered from the largest aggregation (most reliable) to the smallest:

    1. Whole test, scale score or raw score;
    2. Reading Scale (where available);
    3. Standards Performance Indicator (where available);
    4. Content component of Learning Standard;
    5. Major concepts, e.g., main idea;
    6. Item.

The item-by-child level analysis is by far the least useful and most deceptive, while the whole-test-by-whole-state analysis is clearly the most reliable. By proceeding from the most reliable to the least reliable data, local administrators can judge how much credibility the data have.

Step 3: Multiple Measures. Faced with possibly unreliable data, greater reliability may be gained by referring to additional information about students that relates to the same knowledge or skills domain. As a consequence of the self study, the following sources of related data should serve as multiple measures to the extent that they measure the same construct:

    1. Results of other standardized examinations in the same year and on the same construct;
    2. Course grades in the appropriate subject in the same year;
    3. Other course grades that reflect on skills needed to respond to test questions;
    4. Local assessment results in the appropriate subject in the same year;
    5. History of standardized examination performance on the same construct.

Because these measures are reported on different scales, analyses of multiple measures may be facilitated as follows (a sketch of the ranking comparison appears after this list):

    1. Identify a period of time, e.g., two years, over which the performance of every child in the statewide test cohort can be tracked;
    2. Rank all of the students in the cohort from highest (1) to lowest (n, where n is the number of students in the cohort) on each measure;
    3. Note the range of ranks within each performance level on the statewide test;
    4. Rank the cohort on the Standards Performance Indicators, as well;
    5. Look for large discrepancies, e.g., large changes in ranks from one measure to the next, or for each individual measure compared with the student's average rank over all measures. For example, if, among 40 students in the cohort, a student ranks 12th on the overall English Language Arts scale but 8th on the Reading scale, the student may have greater difficulty in Writing than in Reading.
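Here is the minimal sketch of the ranking comparison referenced above; the student names, the measures, and the flagging threshold are hypothetical, chosen only to illustrate steps 2 through 5.

    # Hypothetical scores on three measures (higher = better)
    scores = {
        "state_ela_scale": {"Ann": 648, "Ben": 655, "Carla": 630, "Dev": 661},
        "reading_scale":   {"Ann": 640, "Ben": 649, "Carla": 652, "Dev": 660},
        "course_grade":    {"Ann": 88,  "Ben": 90,  "Carla": 75,  "Dev": 93},
    }

    def ranks(measure):
        # Rank students from highest (1) to lowest (n) on one measure
        ordered = sorted(measure, key=measure.get, reverse=True)
        return {student: i + 1 for i, student in enumerate(ordered)}

    rank_table = {name: ranks(m) for name, m in scores.items()}
    for s in ["Ann", "Ben", "Carla", "Dev"]:
        rs = {name: rank_table[name][s] for name in scores}
        avg = sum(rs.values()) / len(rs)
        # Flag measures whose rank departs widely from the student's
        # average rank (the 1.0 threshold is arbitrary, for illustration)
        flags = [name for name, r in rs.items() if abs(r - avg) >= 1.0]
        print(s, rs, "average rank %.1f" % avg, "check:", flags)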

Particular attention should be given to variables on which a student's rank falls outside the range of ranks of that student's performance level on the statewide examination. For example, suppose a student scores at Level 2 on the Grade 4 English Language Arts examination, there are 42 students in fourth grade in the district, and the ranks of Level 2 students range from 7 to 15. Particular attention should then be given to any variable on which any Level 2 child ranks from 1 to 6 or from 16 to 42.

The most persuasive evidence for student placement should be data from the most reliable source, on measures most focused on the State Learning Standards, and collected nearest the date of the administration of the statewide test. The self study should be very useful in explaining discrepancies in the ranked data. Particular attention should be given to differences in ranks between statewide test data aggregated to the level of the Learning Standards, such as the Standards Performance Indicators, and the overall scale scores on the statewide assessment. This comparison will identify areas of particular strength and weakness.

Intervention

Any student below a passing scale score (local or State) on the Regents examinations, or below Level 3 on the statewide Grade 4 and Grade 8 English Language Arts or Mathematics examinations, should receive some form of instructional intervention. As mentioned earlier, for students nearest the criterion score, the intervention is more quantitative in nature, requiring more instructional time for certain deficits. For students in Level 1 of the Grade 4 or Grade 8 English Language Arts or Mathematics examinations, a qualitatively different instructional program, in which entirely different instructional strategies may be required, may better suit the student's needs.

The decision about the required degree of special intervention is a matter of local discretion, in accordance with Part 100 of the Commissioner's Regulations. Multiple measures should both identify the intensity of the needed intervention and indicate the types of feedback mechanisms that should be in place to decide when the special instructional intervention has been successful and which future intervention or placement is most appropriate. For example, if the ranks analyses indicate that students in Level 2 of the Grade 4 English Language Arts examination have a history of strength in writing and a relative weakness on the reading scale, then feedback mechanisms should be built into the intervention to discern progress in reading and maintenance of skill levels in writing.

Using Strengths to Address Weaknesses

The multiple measures analyses should identify strengths as well as weaknesses. In the second example above, writing can be used to improve reading skills through a program in which students edit their own written work or write from their interpretation of reading passages. It is beyond the scope of this paper to suggest intervention models, but the multiple measures and assessment models lend themselves readily to effective intervention.

Profile Analyses

A more complex analysis is available by reviewing each student's performance on parts of the examination in terms of standardized differences from mean scores of certain populations. One way of accomplishing this is to compute the mean and standard deviation on each Learning Standard or key idea for the populations who scored exactly at the cutoff scores defining each performance level.

The computation of differences divided by the standard deviation standardizes these differences, making them comparable for the purpose of identifying areas of strength and weakness. For example, although all of the Standards Performance Indicators (SPI) range from 1 to 100, in actuality the scores of the students of the State may be bunched within a small range (small standard deviation) for one standard. Scoring below that range may therefore indicate a particular deficit for a student, since the other students in the State or class who were exposed to the standard were more likely to score higher. On a standard in which the scores are more evenly scattered (larger standard deviation), the same SPI score may indicate less of a problem for a particular student but somewhat more of a problem for a group of students.

Table 1 shows the SPI means and standard deviations for students scoring exactly at the cutoff scores on the Grade 4 and Grade 8 English Language Arts and Mathematics examinations.

Table 1

Standards Performance Indicators Means and Standard Deviations 
for Students Scoring Exactly at the Cutoff Scores on
Statewide Examinations in Grades 4 and 8

   

Examination   SPI     Level 2           Level 3           Level 4
                      Mean      SD      Mean      SD      Mean      SD

Math-4        1       37.21     5.95    60.46     6.09    83.50     4.90
              2       46.57     3.73    64.02     3.46    85.14     2.64
              3       32.92     3.52    55.29     3.78    83.06     3.23
              4       50.59     4.63    64.12     4.45    78.28     4.13
              5       48.92     4.76    70.00     3.50    84.91     2.31
              6       36.55     7.10    59.13     5.92    80.67     4.69
              7       62.02     6.25    80.18     5.89    91.72     5.25

Math-8        1       30.95     7.16    52.57     7.16    77.20     6.76
              2       56.61     7.24    81.77     6.53    95.17     4.58
              3       29.50     5.44    51.39     6.66    83.98     6.00
              4       52.39     8.63    77.11     6.85    92.59     4.36
              5       48.75     6.33    70.44     5.27    88.69     5.11
              6       57.18     9.41    79.83     8.97    91.89     9.70
              7       28.42     4.26    47.46     5.19    77.36     5.09

ELA-4         1       57.50     3.45    83.09     3.03    96.77     1.39
              2       46.61     1.94    68.70     2.53    89.24     1.16
              3       32.29     2.98    56.07     4.39    83.67     1.61

ELA-8         1       44.59     2.16    73.29     1.53    90.46     1.08
              2       62.33     5.48    85.57     5.12    95.24     3.53
              3       44.71     2.51    72.04     2.05    87.96     2.46

Table 2 presents some fictitious data as an example of student-level analysis. The standardization (difference from the mean divided by the standard deviation) places all of the comparisons on the same scale. This enables an analysis of how far each student is from the profile of a just minimally achieving Level 2, Level 3, or Level 4 student.

The data in Table 2 show that this student is relatively strongest in Key Idea 2 (+0.86) and weakest in Key Idea 3 (-1.13).
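The computation behind Table 2 can be reproduced directly, as in the following sketch, which uses the Level 3 Math-4 means and standard deviations from Table 1 together with the fictitious student's scores:

    # Minimum Level 3 profile for Math-4 (from Table 1)
    means = [60.46, 64.02, 55.29, 64.12, 70.00, 59.13, 80.18]
    stds  = [6.09, 3.46, 3.78, 4.45, 3.50, 5.92, 5.89]
    # Fictitious Level 2 student's SPI scores (from Table 2)
    student = [58, 67, 51, 64, 71, 64, 77]

    for spi, (x, m, s) in enumerate(zip(student, means, stds), start=1):
        z = (x - m) / s          # (Student - Mean) / standard deviation
        print("SPI %d: %+.2f" % (spi, z))
    # Output matches Table 2: -0.40, +0.86, -1.13, -0.03, +0.29, +0.82, -0.54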

Resource Needs Comparisons

Table 3 presents the means and standard deviations for the resource need categories on the 1999 Grade 4 and Grade 8 Mathematics and English Language Arts assessments. In an absolute sense, these data are not very informative, but they do provide a general description of how students statewide in the same resource need category fared on the four examinations. Again, standardized differences may be computed as shown in Table 2, using the means and standard deviations in Table 3 to aid the profile analyses.

Table 2

Analysis on Fictitious Student Data of Relative Strengths and 
Weaknesses for Grade 4 Mathematics

Standards Performance Indicator            1       2       3       4       5       6       7

Profile of Minimum        Mean         60.46   64.02   55.29   64.12   70.00   59.13   80.18
Level-3 Student           Std.          6.09    3.46    3.78    4.45    3.50    5.92    5.89

Level 2 Student's Scores                  58      67      51      64      71      64      77

Standardized Differences*              -0.40   +0.86   -1.13   -0.03   +0.29   +0.82   -0.54

*(Student - Mean)/standard deviation

Table 3

Means and Standard Deviations of Standards Performance Indicators,
by Resource Need Category, on the 1999 Grade 4 and Grade 8
English Language Arts and Mathematics Examinations

Examination   Key Idea           NYC      Big 4    High/US  High/Rural  Avg.     Low
              or Stand.

Math-4        1        Mean      58.14    58.48    64.39    69.00       72.89    79.97
                       Std.      22.50    20.13    19.67    17.75       17.11    14.51
              2        Mean      63.13    63.17    68.24    72.02       75.49    81.99
                       Std.      19.42    17.02    16.76    15.18       14.86    13.04
              3        Mean      54.90    55.16    61.46    66.08       70.46    78.26
                       Std.      22.80    20.60    20.66    19.22       18.08    16.32
              4        Mean      62.83    63.09    66.65    69.30       71.89    76.65
                       Std.      15.28    13.30    12.99    11.63       11.59    10.52
              5        Mean      65.18    65.80    70.73    74.60       77.36    82.25
                       Std.      19.76    17.35    16.07    13.38       12.60    10.15
              6        Mean      56.96    57.11    62.86    67.08       70.84    77.57
                       Std.      21.14    19.10    18.75    17.02       16.48    14.22
              7        Mean      75.40    76.22    80.31    83.20       85.54    89.58
                       Std.      18.34    15.92    14.49    12.30       11.47     9.12

Math-8        1        Mean      36.65    34.68    39.71    45.94       50.31    57.93
                       Std.      21.21    18.22    20.32    19.43       20.12    19.98
              2        Mean      58.94    57.44    63.14    70.57       74.75    81.39
                       Std.      24.32    21.81    23.23    20.95       20.15    17.72
              3        Mean      36.26    33.69    39.47    45.34       50.75    59.27
                       Std.      21.90    18.42    21.16    20.82       22.20    22.57
              4        Mean      52.91    52.67    58.97    66.88       71.13    77.58
                       Std.      25.79    23.04    24.13    21.09       20.45    18.03
              5        Mean      50.93    49.54    54.96    61.64       65.74    72.33
                       Std.      22.64    19.83    21.19    19.12       18.94    17.54
              6        Mean      57.81    57.12    62.72    69.94       73.63    79.65
                       Std.      24.81    22.39    23.08    20.41       19.68    17.45
              7        Mean      34.61    32.24    37.15    42.75       47.21    55.26
                       Std.      19.88    16.51    19.09    18.57       19.98    20.71

ELA-4         1        Mean      71.70    71.84    76.43    79.95       83.05    86.55
                       Std.      17.97    15.95    14.99    13.63       12.56    10.52
              2        Mean      59.58    59.60    63.75    67.52       70.69    74.52
                       Std.      17.57    15.13    14.75    13.81       13.26    12.02
              3        Mean      47.78    46.94    51.45    55.32       59.37    64.35
                       Std.      19.02    16.79    16.76    16.37       16.17    15.16

ELA-8         1        Mean      63.55    61.91    65.42    69.26       72.81    77.28
                       Std.      18.67    16.77    16.67    15.40       14.82    13.39
              2        Mean      76.80    75.98    78.75    81.75       84.16    87.20
                       Std.      16.14    14.60    14.01    12.43       11.65    10.15
              3        Mean      62.66    61.22    64.49    68.31       71.65    75.93
                       Std.      18.88    16.75    16.44    14.99       14.30    12.78

Conclusion

The focus of this paper is on maximizing the information available from statewide assessments while limiting the threats to reliability. Several models are presented for working down from the most reliable sources of data in order to make sound program inferences from the less reliable sources.

Future papers will provide additional information to support a sound basis for decisions by those responsible for educational programs.