Appendix 6-11

The Friedman Model
Results Accountability Framework (www.resultsaccountability.com)

 

            Mark Friedman’s Results Accountability Framework is based on a four-quadrant conceptualization of program performance measures, which address the quantity and quality of inputs, or what we do (effort), and the quantity and quality of outputs, or impact (effect). The model attempts to answer important evaluative questions: “How do we know if we are doing badly?” “How do we know what ‘better’ is?” and “Is anyone better off as a result of what we do?” Friedman begins his discussion by clarifying three core concepts of the model: “results,” “indicators,” and “program performance measures.”

            Results are defined as conditions of “well-being” for children, families, and communities. They are matters of common sense, reflecting the basic desires of citizens and the fundamental purposes of government, and they cross over agencies and programs. Results of this type typically have “staying power”: they are unlikely to change over many years, and they are the right place to start in figuring out how to get “there” from “here.” An example related to this work might be teachers remaining in their jobs for more than five years. 

Indicators are measures that quantify the achievement of the desired result. They help answer the question, “How would people know a result if they achieved it?” Indicators can be useful in creating a report card on progress toward the result: indicator baselines are established and then used to project trend lines. One indicator of the result “teachers are remaining in their jobs” would be the teacher retention rate measured at specific points in time. Others might relate to the strategies used to encourage retention, such as induction and mentoring programs; such indicators would address the quantity and the quality of the induction/mentorship programs (see Appendix 6-12). 

Performance Measures assess the overall effectiveness of program service delivery. Does the program work the way it should? As described by Friedman, there are distinctions between the ends and the means. Results and indicators are about the ends. Strategies are the means to get there from here and performance measures indicate whether the individual strategies are having the desired impact to achieve the intended results.

When used as an evaluation framework, implementers of the model need to agree on core program performance measures. Friedman encourages users to choose indicators and measures that meet three criteria: 1) communication power: they communicate to a broad range of audiences; 2) proxy power: they say something important about the result and/or bring along the rest of the “data herd”; and 3) data power: quality data are available on a timely basis. When these criteria are applied, the result is generally a short list of four to six measures.

 

  

Program Performance Measures
Questions About Service Delivery

                |  Quantity                                 |  Quality
----------------|-------------------------------------------|-------------------------------------------------
Input/Effort    |  How much service did we deliver?         |  How well did we deliver service?
Output/Effect   |  How much effect/change did we produce?   |  What quality of change/effect did we produce?


            Friedman uses a four-quadrant table format to illustrate his concepts of “effort” and “effect.” The column headings distinguish the quantity and the quality of what is measured; the row headings distinguish input (effort) from output (effect). Together, the quadrants depict four types of measures: 1) quantity of effort, 2) quality of effort, 3) quantity of effect, and 4) quality of effect. School evaluation teams will need to identify performance measures across these four dimensions that answer the critical performance questions: How much did we do? How well did we do it? Is anyone better off? 

 

            The quality of input, or effort (Quadrant 2), is often easily measured (e.g., the percentage of participants indicating satisfaction with a service); however, the quality of output, or effect (Quadrant 4), is more difficult to capture because the program may have less control over the variables that produce the effect. The types of measures found in each quadrant are depicted below. Examples specific to “Keeping Quality Teachers” can be found in Appendix 6-12. 

 Separating the Wheat from the Chaff
Types of Measures Found in Each Quadrant

How much did we do?
  # Clients/customers served
  # Activities (by type of activity)

How well did we do it?
  % Common measures (e.g., client-staff ratio, workload ratio, staff turnover rate, staff morale, % staff fully trained, % clients seen in their own language, worker safety, unit cost)
  % Activity-specific measures (e.g., % timely, % clients completing activity, % correct & complete, % meeting standard)

Is anyone better off?
  % Skills/knowledge (e.g., parenting skills)
  % Attitude (e.g., toward drugs)
  % Behavior (e.g., school attendance)
  % Circumstance (e.g., working, in stable housing)

[Text box: Point-in-Time vs. Point-to-Point Improvement]


How do we get from talking about results to doing something about them?

1.     Identify and establish a mutually agreed upon set of results.

2.     Select indicators which measure and communicate whether the results are being met.

3.     Establish a baseline, report the “story behind the baseline” (its history), and develop forecast trend lines.

4.     Review strategies and resources to assist in turning the curve away from the baseline.

5.     Involve partners in implementing research-based strategies to produce the desired results.

6.     Begin implementation of the selected strategies while continuing to look for new ones that will stand the test of time.

7.     Use a feedback loop to review success of the strategies and correct as needed. “Success equals beating the baseline.”

 

            The strength of this evaluation model lies in the district’s ability to assess its progress across the three research-based strategies (the role of the administrator, working conditions, and induction and mentoring); select specific, targeted strategies/activities to affect the area of lowest performance; conduct evidence-based evaluation using the Friedman model; and, finally, perform post-implementation assessment using the general survey instrument (Appendix 6-13). The model follows Reeves’s recommendation that “it is more important and accurate to measure a few things frequently and consistently than to measure many things once” (Reeves, D. B. (2004). Accountability for Learning: How Teachers and School Leaders Can Take Charge. ASCD, p. 25). Additionally, the data are easily reportable and presented in a user-friendly format.