Alright guys. The main complaint about the discussion article was simply "hoax," voiced as loudly or as quietly as each commenter felt it. Hopefully this post won't get the same treatment.

We have been evaluating educational, grant-funded programs for 20 years. Throughout these years, we have witnessed a slow change in how students are selected for academic services. Traditionally, students were targeted for academic services and opportunities based on demographic characteristics—usually race and, until recently, family income status (based on free or reduced-price lunch). Wealthier, white students are given challenging lessons and tracked into advanced courses, while their non-white and poorer peers are tracked low and given remediation services. The latter students are often referred to as “at-risk,” though we find more and more that the greatest risk these students face is being placed into inappropriate remedial courses that eventually bar them from access to advanced courses. After students have been labeled “at-risk,” tracked inappropriately, and provided unnecessary (and often harmful) remediation, their downward trajectory continues throughout their education. The demographic gap this creates continues to expand, despite the lip service and the excessive tax and grant funds spent to eliminate—or at least lessen—this very gap. This “at-risk” model of assigning services is slowly being replaced by a “pro-equity” model. The driving force behind this change is the availability and use of data.

The literature is full of documentation that certain demographic groups have traditionally had less access to advanced math and science courses than equally scoring students belonging to demographic groups thought to be “not at risk.” Some examples from research follow.
•    Sixth grade course placement is the main predictor of eighth grade course placement, and social factors, mainly race, are key predictors of sixth grade course placement (O’Connor, Lewis, & Mueller, 2007).
•    Among low-income students, little is done to assess which are high achievers. Few programs are aimed at them, and their numbers are lumped in with “adequate” achievers in No Child Left Behind reporting. As a result, little is known about effective practices for low-income students (Wyner, Bridgeland, & DiIulio Jr., 2007).
•    In a California school district, researchers found that of students who demonstrated the ability to be admitted to algebra, 100% of the Asians, 88% of the whites, 51% of the Blacks, and 42% of the Latinos were admitted (Stone & Turba, 1999).
•    Tracking has been described as “a backdoor device for sorting students by race and class.” Many researchers agree (Abu El-Haj & Rubin, 2009).
•    When course grades are used to determine placement, studies show that some students’ grades “matter” more than others. Perceptions of race and social class are often used to determine placement (Mayer, 2008).
•    Studies show that when schools allow students the freedom to choose which track they’ll take, teachers and counselors discourage many previously lower tracked students from choosing the higher track (Yonezawa, Wells, & Serna, 2002).
•    The sequence of math students take in middle school essentially determines their math track for high school. In North Carolina, this is true because of math prerequisites for higher level math (North Carolina Department of Public Instruction, 2009).

We are seeing a move toward using objective data for placement into gateway courses, such as 8th grade algebra. Many school districts are beginning to use the Education Value-Added Assessment System (EVAAS) and other data systems whose scores predict success in 8th grade algebra as criteria for enrollment. This pro-equity model is replacing the traditional, at-risk model that relied on professional judgment. One example of this is in Wake County, North Carolina. Superintendent Tony Tata attributed a 44% increase in the number of students enrolled in algebra to the use of the predictive software, EVAAS, to identify students likely to be successful. The success rate in the course increased with the addition of these students (KeungHu, 2012).
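To make the contrast concrete, here is a minimal sketch of what data-driven placement looks like, assuming a hypothetical student roster with an EVAAS-style predicted-success score. The field names and the 0.70 cutoff are our own illustrative assumptions, not actual EVAAS output or any district's policy.

```python
# Minimal sketch of data-driven placement (pro-equity model).
# The Student fields and the 0.70 threshold are illustrative assumptions,
# not the actual EVAAS data format or a real district cutoff.

from dataclasses import dataclass
from typing import List

@dataclass
class Student:
    student_id: str
    predicted_algebra_success: float  # EVAAS-style predicted probability, 0.0-1.0

def recommend_for_algebra(roster: List[Student], threshold: float = 0.70) -> List[str]:
    """Return IDs of students whose predicted success meets the cutoff.

    Note what is *not* an input: race, income, or teacher recommendation.
    Placement depends only on the objective prediction.
    """
    return [s.student_id for s in roster if s.predicted_algebra_success >= threshold]

if __name__ == "__main__":
    roster = [
        Student("A101", 0.91),
        Student("A102", 0.55),
        Student("A103", 0.78),
    ]
    print(recommend_for_algebra(roster))  # ['A101', 'A103']
```

The point of the sketch is the function signature: demographic labels never enter the placement decision, which is precisely what distinguishes the pro-equity model from placement by professional judgment.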

Although the pro-equity model of using objective data to assign students to more rigorous courses has proven successful, many people resist it. These people cling to the at-risk model, dismissing the objective data as inconclusive. According to the old-school staff, the overlooked students who were predicted to succeed yet were placed in lower tracks (disproportionately minorities) are “weaker,” and allowing these students into the gateway 8th-grade algebra course would be a disservice to them. (Not allowing them into this course is what ensures their bleak academic future.) Review of the data showed that strong students were being overlooked, and objective use of data helps identify them (Sanders, Rivers, Enck, Leandro, & White, 2009).

The changes in education began with concern for aligning academic services with academic need. Aligning opportunities for rigor and enrichment is only just beginning. In the past, a large proportion of federal grant funds went toward raising proficiency rates: in the at-risk model, grant funds were provided for services to minority and poor demographic groups with the goal of raising academic proficiency rates. When we first started evaluating grant-funded programs, most federal grants were entirely in the at-risk model. Students were targeted for services based on demographic characteristics, and the goal was to deliver the services to this group. Staff development was often designed to help staff understand children in poverty and what their lives were like, rather than helping them learn how to deliver an effective reading or math intervention. The accountability reports we were hired to write consisted of documentation that the correct demographic group was served, the program was delivered, and staff received their professional development. Proficiency rates were rarely a concern.

In 2004, the federal government developed the Program Assessment Rating Tool (PART) to provide accountability for grant-funded programs by rating their effectiveness. The PART system assigned scores to programs based on whether services related to goals, whether the goals were appropriate for the individuals served, and whether student success was measured against quality standards and assessments. Programs that could not demonstrate whether they had been effective, because of a lack of data or clear performance goals, were given the rating “Results Not Demonstrated” (U.S. Office of Management and Budget and Federal Agencies, n.d., “The Program Assessment Rating Tool”). In 2009, nearly half (47%) of the U.S. Department of Education grant programs rated by the government were given this rating, illustrating the difficulties of making the transition to outcome-based accountability (U.S. Office of Management and Budget and Federal Agencies, n.d., “Department of Education programs”).

The earliest changes were in accountability, not in program services or how students were targeted. Accountability reports began asking for pre- and post-comparisons of academic scores. For example, if funds were for raising proficiency rates in reading, then evaluation reports were required to compare pre- and post-reading scores. This was a confusing period, because programs still targeted students based on demographic information and provided services that often had no research basis linking them to academic achievement; professional development often remained focused on empathizing with children in poverty, although the goals and objectives would now be written in terms of the participants raising their academic achievement to proficiency. We evaluators were often called in at the conclusion of programs to compare pre- and post-academic scores and determine whether participants improved their scores to grade-level proficiency. We often saw the results of capable students being treated like low achievers, assumed to have no self-esteem, and given remedial work. Such treatment damaged participants who had scored at or above proficient prior to services.

A typical narrative of an evaluation might read:

The goal of the program was to raise the percentage of students scoring proficient in reading. The program targeted and served low-income and minority students. Staff received professional development on understanding poor children. Services offered to students included remedial tutorials and esteem-building activities. When the program ended, pre-service reading scores were obtained and compared with post-service scores to measure progress toward the program objective. At that time, it was discovered that a large percentage of participants had been proficient prior to receiving services.
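Under this accountability model, the core of such an evaluation reduces to a pre/post comparison like the one sketched below. This is a toy illustration with invented scores and an invented proficiency cutoff, not actual EOG data; it simply computes the two numbers these reports turn on: how many participants were already proficient before services, and how many non-proficient participants reached proficiency afterward.

```python
# Toy pre/post accountability check. Scores, the proficiency cutoff, and the
# record layout are hypothetical; real evaluations would use state EOG scale
# scores and proficiency levels.

PROFICIENT = 350  # illustrative cutoff on a made-up scale

participants = [
    {"id": "S1", "pre": 362, "post": 365},  # already proficient before services
    {"id": "S2", "pre": 310, "post": 348},
    {"id": "S3", "pre": 355, "post": 341},  # proficient before, below after
    {"id": "S4", "pre": 298, "post": 352},
]

already_proficient = [p for p in participants if p["pre"] >= PROFICIENT]
reached_proficiency = [p for p in participants
                       if p["pre"] < PROFICIENT and p["post"] >= PROFICIENT]

print(f"Served while already proficient: {len(already_proficient)}/{len(participants)}")
print(f"Non-proficient participants who reached proficiency: "
      f"{len(reached_proficiency)}/{len(participants) - len(already_proficient)}")
```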

Rather than cite our own evaluations, we present examples from school districts reporting on themselves.

Accelerated Learning Program

The following is a direct quote from a school system in North Carolina:

. . . Although ALP [Accelerated Learning Program] was designed primarily to help students reach proficiency as measured by End-of-Grade (EOG) tests, only 41.1% of those served showed below-grade-level scores on standard tests before service in literacy. In mathematics, 73.3% of students served had below-grade-level scores. ALP served about 40% of students who scored below grade level within literacy and within mathematics, with other services supporting many others. . . . Compared to those not served, results for Level I-II students were similar, but results for Level III-IV students were less positive. One third of non-proficient ALP mathematics students reached proficiency in 2008, compared to 42.1% of other students. (Lougee & Baenen, 2009).

Foundations of Algebra

This program was designed for students who fit specific criteria, yet it served many students who did not. Students who were below proficient or almost proficient were to be placed in courses that would eventually prepare them for Algebra I. When criteria for placement are not met, determining program effectiveness is difficult, if not impossible. Students were likely entered into the program based on teacher recommendations, which in turn appear to have been based on demographic factors such as race. The teachers “mistook” these students for below-proficient students when they were not. Had objective data, such as actual proficiency scores, been consulted, the proper students could have been served. The report indicates a success, because a higher percentage of served students than of similar students not served enrolled in Algebra I. However, it is not known whether this comparison group includes only students who actually met the criteria, or whether it is a heterogeneous mix of students of varying abilities. Missing data also make evaluating program effectiveness difficult (Paeplow, 2010).

Partnership for Educational Success (PES)

This program was purportedly for students who are “at risk,” defined as students who scored below grade level on the EOG (below proficiency) and had been “identified by the PES team as having family issues that interfere with school success.” What is meant by “family issues” is unclear. The majority of students served were Economically Disadvantaged (ED) (91.3%) and Black (71.5%). More than half the students served, according to the evaluation, were at or above grade level on their EOGs when they began the program, making program effectiveness difficult to judge. The family component is an integral part of the program: outside agencies visit families, and many community organizations are involved. But if the staff could miss so easy a datum as EOG scores for so many students, one has to wonder about so subjective a criterion as “family issues.” The program appears to have targeted ED students with little regard to prior performance data. Data for many students (43.5%) were missing. Teachers indicate that parents of the targeted families have become more involved in the school, but little else has changed (Harlow & Baenen, 2004).

Helping Hands

Helping Hands was initiated based on data indicating that Black males lag behind other groups in academic achievement. The program is supposed to serve Black males, and most of the participants fit these criteria. The program is also designed to improve academics and to curtail absenteeism and suspensions. Although the percentage of selected participants who needed improvement in these areas was higher than for the student population overall, not all students served demonstrated a need for intervention. Many students were at grade level, were not chronically absent, and had not been suspended. Yet they were served because they were Black and male (Paeplow, 2009).

Supplemental Educational Services at Hodge Road Elementary

At Hodge Road Elementary School, students were tutored with remedial work in an after-school program. The only criterion students had to meet to be allowed into the program was the inability to pay full price for their lunch. Their academic performance was irrelevant. (To be fair, this criterion was instituted by No Child Left Behind, not the school system.) Most students were already reading and doing math at or above grade level (the two subjects for which tutoring was provided). The evaluation shows that giving remedial coursework to students who are at or above grade level, as if they were below grade level, can actually harm them. In the final statistics, 11.1% of Level III & IV 3rd through 5th graders scored below grade level after being served, compared with only 2% of a comparable group who were not served. An astonishing 23% of served kindergarten through 2nd grade students who were at or above grade level prior to the tutoring scored below grade level afterward, compared with 8% of comparable students who were not served (Paeplow & Baenen, 2006).

AVID

AVID is a program designed for students who may be the first in their families to attend college and who are average academic performers. The program, developed in the 1980s, maintains that by providing support while holding students to high academic standards, the achievement gap will narrow as students succeed academically and go on to complete higher-level education. Fidelity of implementation is often violated, which, as proponents admit on AVID’s own website (www.AVID.org), may compromise the entire program. Student participants must have a GPA of 2.0-3.5. We were asked to evaluate Wake County Public School System’s AVID program. Many students chosen for the program, however, did not fit the criteria (Lougee & Baenen, 2008). Because AVID requirements were not met, a meaningful evaluation was not possible.

This AVID program was implemented with the goal of increasing the number of under-represented students in 8th grade algebra. This was at a time when no criteria for enrollment in 8th grade algebra existed (i.e., there was no target to help the students reach), and high-scoring students in this very group were not being referred for enrollment in algebra. Under these conditions, the program makes no sense. In summary, the goal of this program was to enroll more low-income students, minority students, and students whose parents didn’t go to college in 8th grade algebra. Only students recommended by teachers could enroll in 8th grade algebra, and the data showed that very high-scoring, low-income and minority students were not being recommended. Why do we think that students whose parents didn’t go to college can’t enroll in 8th grade algebra without being in an intervention program first? (How it is determined that the students’ parents did not attend college is also not addressed.) The program is for low-average students; it served high-average students, and then it still didn’t recommend them for 8th grade algebra. This program is very expensive. We have evaluated it in many school districts, and we typically find the same results as in this report.

During this era, the interventions were typically not linked to the desired outcomes by research. For example, self-esteem-building activities were often provided to increase the odds of passing a math class or to improve reading scores. Sometimes the programs were academic, but neither the claims for success nor the relationship between the activities and the desired outcomes was research-based. Although many interventions were at least related to the academic subject area the program was trying to impact, it was not unheard of to see relaxation courses alone offered to increase math test scores, or make-overs and glamor shots offered to raise self-esteem, which in turn would allegedly raise reading scores.

During the last decade, education has slowly moved toward requiring accountability in terms of comparing pre- and post-scores. We saw this causing confusion and fear rather than clarity. More than once, when we reported to school districts that they had served significant numbers of students who were already at or above proficiency levels, they thought we were saying they had served high-income students instead of their target population of low-income students. We have seen many school systems assess their own programs, write evaluation reports like the examples above, and then continue to implement the programs without any changes. We have worked with some educators whose eyes were opened to the misalignment of services and needs, and they learned to use data, identify appropriate interventions, and keep records to make accountability possible. We’ve seen these innovators close their achievement gaps while raising achievement at the top. But those around them didn’t see this as replicable.

Race to the Top will impact the rate of change from the at-risk to the pro-equity model. Teacher and principal evaluations are going to include measures of growth in student learning (White House Office of the Press Secretary, 2009). EVAAS will be used to compare predicted scores with observed scores. If high-achieving students who are predicted to succeed in 8th grade algebra are tracked into the less rigorous 9th grade algebra, they are not likely to make their predicted growth.
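The underlying growth check can be illustrated with a simple observed-minus-predicted comparison. The sketch below uses invented numbers and is not the EVAAS methodology itself; it only shows why students tracked below their predicted level tend to show up as negative growth.

```python
# Illustrative comparison of predicted vs. observed scores, in the spirit of
# value-added growth measures. Numbers and field names are invented; this is
# not EVAAS internals, just the basic "observed minus predicted" idea.

students = [
    {"id": "B201", "predicted": 360.0, "observed": 352.0},  # fell short of prediction
    {"id": "B202", "predicted": 340.0, "observed": 349.0},
    {"id": "B203", "predicted": 355.0, "observed": 355.0},
]

residuals = [s["observed"] - s["predicted"] for s in students]
mean_residual = sum(residuals) / len(residuals)

# A negative mean residual suggests the group, on average, did not make its
# predicted growth -- e.g., strong students tracked into a less rigorous course.
print(f"Mean observed-minus-predicted: {mean_residual:+.1f}")
```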

We are moving out of this era, and the pace of change toward identifying student needs using appropriate data is picking up. North Carolina’s newly legislated program, Read to Achieve, mandates that reading interventions for students in K-3 be aligned to the literacy skills the students struggle with, and that data be used to determine whether students are struggling with literacy skills. Schools must also keep records for accountability. Although this approach seems logical, it is quite innovative compared with the past reading interventions that targeted the wrong students (North Carolina State Board of Education; Department of Public Instruction, n.d.).

Education grant programs are now requiring that applicants specify what data they will use to identify their target population and how the intervention relates to helping participants achieve the program goals. Staff development must relate to delivering the services well, and accountability must show that all of these things happened correctly, while documenting progress toward the program objectives. It is a new era. We are not there yet, but it is coming.

References
Harlow, K., & Baenen, N. (2004). E & R Report No. 04.09: Partnership for Educational Success 2002-03: Implementation and outcomes. Raleigh, NC: Wake County Public School System. Retrieved from http://www.wcpss.net/evaluation-research/reports/2004/0409partnership_edu.pdf
KeungHu. (2012). Wake County Superintendent Tony Tata on gains in Algebra I enrollment and proficiency. Retrieved from http://blogs.newsobserver.com/wakeed/wake-county-superintendent-tony-tata-on-gains-in-algebra-i-enrollment-and-proficiency
Lougee, A., & Baenen, N. (2008). E & R Report No. 08.07: Advancement Via Individual Determination (AVID): WCPSS Program Evaluation. Retrieved from http://www.wcpss.net/evaluation-research/reports/2008/0807avid.pdf
Lougee, A., & Baenen, N. (2009). E&R Report No. 09.27: Accelerated Learning Program (ALP) grades 3-5: Evaluation 2007-08. Retrieved from http://www.wcpss.net/evaluation-research/reports/2009/0927alp3-5_2008.pdf
Mayer, A. (2008). Understanding how U.S. secondary schools sort students for instructional purposes: Are all students being served equally? American Secondary Education, 36(2), 7–25.
North Carolina Department of Public Instruction. (2009). Course and credit requirements. Retrieved from http://www.ncpublicschools.org/curriculum/graduation
North Carolina State Board of Education; Department of Public Instruction. (n.d.). North Carolina Read to Achieve: A guide to implementing House Bill 950/S.L. 2012-142 Section 7A. Retrieved from https://eboard.eboardsolutions.com/Meetings/Attachment.aspx?S=10399&AID=11774&MID=783
O’Connor, C., Lewis, A., & Mueller, J. (2007). Researching “Black” educational experiences and outcomes: Theoretical and methodological considerations. Educational Researcher. Retrieved from http://www.sociology.emory.edu/downloads/O%5c’Connor_Lewis_Mueller_2007_Researching_black_educational_experiences_and_outcomes_theoretical_and_methodological_considerations.pdf
Paeplow, C. (2009). E & R Report No. 09.30: Intervention months grades 6-8: Elective results 2008-09. Raleigh, NC: Wake County Public School System. Retrieved from http://www.wcpss.net/evaluation-research/reports/2009/0930imonths6-8.pdf
Paeplow, C. (2010). E & R Report No. 10.28: Foundations of Algebra: 2009-10. Raleigh, NC: Wake County Public School System. Retrieved from http://assignment.wcpss.net/results/reports/2011/1028foa2010.pdf
Paeplow, C., & Baenen, N. (2006). E & R Report No. 06.09: Evaluation of Supplemental Educational Services at Hodge Road Elementary School 2005-06. Raleigh. Retrieved from http://www.wcpss.net/evaluation-research/reports/2006/0609ses_hodge.pdf
Sanders, W. L., Rivers, J. C., Enck, S., Leandro, J. G., & White, J. (2009). Educational Policy Brief: SAS® Response to the “WCPSS E & R Comparison of SAS © EVAAS © Results and WCPSS Effectiveness Index Results,” Research Watch, E&R Report No. 09.11, March 2009. Cary, NC: SAS. Retrieved from http://content.news14.com/pdf/sas_report.pdf
Stone, C. B., & Turba, R. (1999). School counselors using technology for advocacy. Journal of Technology in Counseling. Retrieved from http://jtc.colstate.edu/vol1_1/advocacy.htm
U.S. Office of Management and Budget and Federal Agencies. (n.d.). The Program Assessment Rating Tool (PART). Retrieved from http://www.whitehouse.gov/omb/expectmore/part.html
U.S. Office of Management and Budget and Federal Agencies. (n.d.). Department of Education programs. Retrieved from http://www.whitehouse.gov/omb/expectmore/agency/018.html
White House Office of the Press Secretary. (2009). Fact Sheet: The Race to the Top. Washington D.C. Retrieved from http://www.whitehouse.gov/the-press-office/fact-sheet-race-top
Wyner, J. S., Bridgeland, J. M., & DiIulio Jr., J. J. (2007). Achievement trap: How America is failing millions of high-achieving students from low-income families. Jack Kent Cooke Foundation, Civic Enterprises, LLC. Retrieved from www.jkcf.org/assets/files/0000/0084/Achievement_Trap.pdf
Yonezawa, S., Wells, A. S., & Serna, I. (2002). Choosing tracks: “Freedom of choice” in detracking schools. American Educational Research Journal, 39(1), 37–67.

Comments

This comment by jimrandomh is probably still relevant. Excerpt:

(To all the other commenters in this thread: this is one of those cases where you should be providing actionable options to the original poster, not going meta, not expressing outrage, not trying to collect the information to act in his place. Comment as a consequentialist, not as a conversationalist.)

We have worked with some educators whose eyes were opened to the misalignment of services and needs, and they learned to use data, identify appropriate interventions, and keep records to make accountability possible. We’ve seen these innovators close their achievement gaps while raising achievement at the top. But those around them didn’t see this as replicable.

Do you mean substantially reduced, or actually closed? All of the other claims are familiar and have citations, but this one doesn't; a data-driven process to fully close achievement gaps would be a major advance, yet one that has gone unreported (or very underreported) in the academic literature.

You could email good academics studying achievement gaps, e.g. economist Roland Fryer at Harvard has a number of studies on randomized trials of gap-closing interventions, and hasn't yet found something so effective. He also has demonstrated he can get large studies done and funded. Regardless of what local schools think, if you can show good data (or work with them to help them get such data without violating NDAs) confirming this to such academics then you will get a significant response and national attention from wonks and foundations to help scale up better practices.

Fryer would be able to get much more done than we can here.

So, remedial programs are designed for poor-performing students, and applied to economically-poor students?

As a special case of the more general rule:

"Programs and interventions originally meant to help increase general performance, especially among poor-performance students, are applied to a naive heuristic.

Implementation of the program becomes the goal of the program, rather than an increase in performance.

Whether the heuristic is accurate or not is ignored, the results of evaluations are ignored, and the effect on performance (if any) is also ignored."

Not having come from the discussion post, I'm a bit confused as to who 'you' are and what role you (or your organization) plays. I know you've stated that you're trying to remain anonymous, but I'd like to know some of the basic facts: do you do these evaluations for profit and how are you funded?

According to the discussion post, the author works/worked in a firm that evaluates grant requests; the discussion article dealt mainly with the author pointing out how horribly irrational the funding requests were at meeting goals related to improving education (they were rational enough if their only goal was to get funding; the primary example called for vastly more money than was actually needed to accomplish the proposed activities).

I don't remember any further details about the author's job/what organizations ey is involved with.

Yes, we are for-profit. Most grants stipulate that some proportion of the grant money be spent on an evaluation of the project.