GRADE 4 ELA - GENERAL QUESTIONS & ANSWERS

Q: Why is it necessary for teachers to go through training every year before we score the students' responses? Many of us have already been trained to score the ELA test in previous years.

A: Annual training is important. Teachers who have never scored the ELA test clearly need training in how to apply the rubric and its criteria to individual responses, but even experienced scorers need new training each year because the test form differs from year to year. Although the generic rubric and its criteria remain constant, how they are applied varies with each year's form due to differences in the texts and tasks of that particular form. For that reason, scorers must be trained each year with new materials. The scoring guides provide rubrics specific to the passages and questions, and actual responses, written by New York students and scored by New York educators, are provided to demonstrate how the criteria should be applied for each year's ELA test.

Q: What's the difference between "blank" and "absent" in the condition codes?

A: When there is no written response to a cluster, a condition code is assigned to reflect that fact. If the student booklet indicates that the student was absent, the code for "absent" should be used. If there is no indication that the student was absent, the condition code for "blank" should be used.

Q: How do I use the videotapes?

A: Videotapes have been provided for each component of the Grade 4 English Language Arts test to assist in training scoring leaders and scorers. The trainer in each videotape discusses the contents of the Scoring Guide and the Practice Set for that content area. The Scoring Guide is presented first to demonstrate how the scoring rubric should be applied to student responses. We suggest that Training Leaders stop or pause the videotape before the videotaped trainer begins discussion of the Practice Set. This gives those being trained an opportunity to read their Practice Sets and practice making scoring decisions.

We also suggest that scorers practice on only one or two student responses at a time, stopping to review the correct score(s) before moving on to the next. The Scoring Leader may read and discuss the annotations and marginalia in their copies of the Practice Sets, or may resume the videotape at appropriate intervals. Several short practice segments, each followed by review, maximize the opportunity to learn by doing and help build scorer skill and confidence.

Q: As a Scoring Leader, how should I prepare to train Table Facilitators and Scorers?

A: Training procedures and the logistics of live scoring are covered in the Scoring Leader Handbook, which should be read thoroughly before training. You should also review your Scoring Guide and Practice Set while viewing the videotape.

Q: How should a response be scored if it's entirely blank, or it says the student refuses to answer, or it's written in another language?

A: A list of Condition Codes can be found near the back of the Scoring Guide, and the Scoring Leader Handbook contains the procedures for assigning such codes. Responses written in another language should always be scored as an "E" even if the Scoring Leader or another scorer understands the other language, since the test is intended to assess English communication skills.

Q: Sometimes a student will respond to some but not all items. What Overall score should such responses receive?

A: Near the back of the Scoring Guide is a list of Scoring Considerations. These outline the effect of missing responses on the Overall score.

Q: Suppose a student leaves the short responses blank and answers only the extended response, but the extended response clearly demonstrates understanding of all of the questions posed in the other items. Since the Overall score is supposed to reflect the student's understanding holistically, can such a response receive a "6"?

A: No. If only the extended response is answered, the Scoring Considerations limit the Overall score to a "2."

Q: When training is over, should scorers refer to the training materials while scoring actual student responses?

A: YES! To maintain accuracy and consistency in scoring, it is very helpful to refer occasionally to the student responses used in the training materials as examples of the various score points. These responses are often called "anchor papers" because they help to fix the acceptable range within a score point and prevent the scorer from "drifting" higher or lower in their expectations for awarding a score point.

Q: I understand that holistic scoring involves weighing and balancing various factors. What are these factors, and what weight should be given to each?

A: The scoring rubric addresses the factors to be considered in determining the score of a response by listing characteristics that tend to occur at each score point. These characteristics reflect the degree to which focus, development, organization, and writing style are found within a response.

Focus is how well the response fulfills the requirements of the task and the connections to the task found within the response. Development is how much information is presented: the details, the specifics, and the amount of elaboration on ideas. Organization is the order in which the information is presented. Does one idea logically follow another? If it's a narrative, how tight is the sequence of events? Writing style generally concerns word choice and sentence patterns. How fluent is the response? Is it easy to read?

Writing style should not be confused with writing mechanics. Style concerns which word is used, whereas mechanics concerns how the word is spelled. Style looks at how the sentence patterns create a flow of ideas, while mechanics looks at how the sentences are punctuated. Remember that writing mechanics is scored separately and should not be a factor in scoring Independent Writing.

In assigning a score to an Independent Writing response, all relevant factors should be assessed. However, the most important factor by far, and the one accorded the most weight, is DEVELOPMENT. The amount of development is central to each score point. How much information are we being given? What are the details and the specifics? Are ideas or events elaborated and expanded upon? Development is not only important in and of itself; it also affects the other factors. There must be a certain amount of information presented before a scorer can assess a response's focus, organization, and fluency.

Caution: development is not synonymous with length! Presenting the amount of information necessary to reach a higher score point will naturally produce longer responses, and handwriting size makes a difference, but other considerations also come into play. Repetition will make a response appear longer but does not add to the quantity or quality of the elaboration. Word choice can also affect development: specific and/or vivid words pack more information into less space.

Q: Our scorers are experienced teachers who adhere to certain standards in their classrooms. Some scorers may find it difficult to follow the standards set by the rubric and the training materials if those standards seem higher or lower than those used by the scorer in the classroom. How should I advise a scorer who hesitates to apply the standards appropriate for this test?

A: We value the classroom experience of our scorers, and we realize that expectations may vary among districts, schools, and individuals. However, it is very important that all scorers separate their classroom expectations from the standards used in scoring this statewide test. Every scorer should apply the same standards when using the rubric to score student responses. Uniform scoring standards are crucial to the consistency and accuracy necessary for a valid assessment of student performance across the entire state. Accurate assessments ultimately benefit everyone.

Q: How can a scorer avoid "drifting" from the correct standards while scoring?

A: After scoring a number of responses, a scorer may gradually, even unconsciously, begin to accept more or less than is appropriate in awarding a particular score point. This could result in scoring inequity, where a student response could receive a different score from the same person depending on when it was scored. To maintain the consistency and accuracy of all scores, it is important to prevent any "drift" in scoring standards. This is best accomplished by frequent reference to the "anchor papers" in the training materials, and by encouraging scorers to consult their Table Facilitators or Scoring Leaders with responses that seem on the line between two score points.

Q: Some students with disabilities have extremely large handwriting and won't be able to fit their answers to open-ended questions into the spaces provided in Book 2. Are they permitted to use extra paper?

A: If the IEP/504 plan provides for this testing accommodation, students may use additional paper or write their answers on a word processor and attach the printed response. A notation must be made on the front of the booklet that the student had this accommodation (there is a section in the School Administrator's Manual that addresses this).

Q: What if I encounter a response in which the student indicates that he or she is in a crisis situation and needs intervention? How should such sensitive responses be handled?

A: Scorers should be instructed to bring such responses to the immediate attention of the Scoring Leader. Scoring Leaders need to notify the school principal of any sensitive responses. If tests are being scored regionally, Scoring Leaders should alert the Site Coordinator, who will contact the student's principal.

Q: What if a student writes the correct information for a response on a different page, such as the planning page, instead of on the correct response page?

A: If the response page is blank, it must be scored to reflect that it is blank. However, if a student indicates graphically on the correct response page that a response is written or continued onto another page, then the scorer can follow the studentís instructions and consider the information on the indicated page.

Q: The rubric says a "6" will have "vivid language and a sense of engagement or voice." Where in each of the "6"s in the training materials can I find examples of vivid language and voice?

A: Not all "6"s will have vivid language or a sense of engagement. However, the precision of language and the manner of expression can strengthen a response if all of the other elements are present. Voice, where the personality of the student shows itself in the manner of expression, is like the cherry on a sundae: the sundae must be there first before the cherry can be seen as adding anything substantial. Keep in mind also that what counts as vivid language or voice for a fourth-grade student may differ from what you or I consider vivid.

Q: On borderline calls, when deciding between adjacent score points, should the scorer always give the "benefit of the doubt" to the student and award the higher score?

A: No. Such a practice can result in scoring "drift." After scoring a number of responses, a scorer may gradually, even unconsciously, begin to accept less (or demand more) than is appropriate in awarding a particular score point. Scoring "drift" can create an unfair situation where a student response could receive a different score from the same scorer depending on when the response was scored. To prevent "drift" and maintain the consistency and accuracy of all scores, it is helpful to refer occasionally to the student responses used in the training materials as examples of the various score points. These responses are often called "anchor papers" because they help to fix the acceptable range within a score point and prevent the scorer from "drifting" higher or lower in their expectations for awarding a score point. Scorers should also be encouraged to consult their Table Facilitators and Scoring Leaders with responses that seem on the line between two score points.

Q: Where do the student sample responses used in training come from and what was the procedure used to decide how to score them?

A: These responses were generated by New York State students when items for the NYSTP were field tested. After the field tests were completed, teachers from all over New York State were invited to take part in rangefinding sessions to help determine how to apply the rubrics and arrive at scores for these responses. The generic and specific rubrics were discussed, and then packets of randomly selected student responses were scored. The scores were recorded, and any discrepancies were discussed and resolved by reference to the rubric. Measurement Incorporated then used these scored responses to create the training materials.