This year, I have to subject my eighth-grade students to 6,600 minutes of district-mandated testing. That’s 110 hours. That’s the equivalent of 15 entire seven-hour school days. Eighth-grade students in Hartford Public Schools are required to take 25 mandated district and state-wide assessments between late August and early June. Thirteen percent of the entire school year is dedicated to administering these “high-stakes tests.”
One could argue that all of this testing leads to data that could drive instruction, that could inform teaching, that could allow for innovative and creative responses to student needs. However, over-testing students waters down the tests’ effectiveness, usefulness, and integrity.
But what are the actual results? Are truancy or disengagement rates down? Are Hartford students getting accepted to colleges at higher rates? Do these tests correlate with students’ success after high school? No. Absolutely not.
However, some results are clear. Students are, in large numbers, experiencing test fatigue, anxiety, disengagement, detachment, and apathy towards school. Could it be that by constantly testing students we are actually preventing them from developing crucial life skills such as problem-solving, collaboration, and experiential learning?
Within the first eight days of school this year, I was asked to give three different district-mandated assessments. Historically, the first eight days of school have been used to learn students’ names, introduce them to classroom expectations and routines, and establish a rapport that should foster learning, security, and engagement. In what ways does immediately assessing students damage a teacher’s ability to create a safe and supportive learning environment for our students?
Yet teachers are rarely included in the decision-making process when it comes to education, and even less likely to be included in conversations about assessments. Although Hartford allowed teachers to participate in the creation of some of the district-mandated assessments, they were not consulted on the frequency or timing of any of them. As a professional, I feel that it undervalues my expertise to be excluded from these decisions. I cannot imagine how disempowering it must feel for the students, the victims of a system that commits so much time to testing over learning.
Tiffany Moyer-Washington is a Hartford Public School teacher.
The truth of the matter is that the Hartford Public School system is failing on every measure of performance. Yes, there are high levels of student absenteeism, and an equal amount of calling in sick from teachers. The collective bargaining units are a prime cause of the education problems, and a complete lack of parent involvement in their children’s education is another. The data overwhelmingly indicates that students receiving a high school diploma from a Hartford Public School have the equivalent of a 10th-grade education. So their only access to postsecondary education is a community college. Review the students’ grades in the key disciplines: math, English, and science. In nearly every case the grades are D’s and C’s. Why? Because it is very difficult for teachers to inflate grades in these disciplines: either you know the correct answer or you don’t.
Thank you to this teacher for a very poignant and truthful description of the dismal condition of public education, not just in Hartford but everywhere. The union-controlled bureaucracy has stolen every teacher’s creative ambition. It’s time for the teachers to speak up and take back their profession.
Too much testing is an inhibitor to student success…I think not!
Following is an excerpt from an article in the Hartford Courant, dated 17 Dec 2018:
“Hartford’s public schools outranked those of other major Connecticut cities. In New Haven, 20 percent of the district’s 21,500 students were chronically absent last year. In Bridgeport, about 19 percent of the district’s 20,900 students amassed enough absences to qualify.”
The problem isn’t testing; it’s truancy and lack of commitment.
Maybe truancy is the direct result of excessive testing and other factors. Who knows? We don’t collect the data that would give us any insight into this dynamic.
After billions and billions of dollars, isn’t it abundantly clear to everyone that it isn’t testing, money, or “programs” that are the solution?
The only reason for the chronic poor performance of these children is the multi-generational lack of two parent homes. Has a “study” ever been done of how many children are being raised by a single parent (usually the mother)?
How many by a relative? How many are in households with half-siblings whose fathers are absent? How many older siblings of school-age children graduated high school? College?
After four generations of this, when will there be an honest discussion around this?
An “honest” discussion? On what basis are you claiming that the “only” reason for poor academic performance is single parent households?
Billions and billions of dollars haven’t worked. How about asking what other factors might be at play that have led to 40 years of zero progress in academic achievement?
Are you in favor of continuing things that don’t work?
See my comment above. We don’t make any systematic effort to understand the dynamics that drive educational outcomes. Testing is a very, very small part of the story: there are a host of additional factors that impact student performance and what happens in the classroom. CT once had the opportunity to track all of these factors and learn what works and what doesn’t, but abandoned the effort. Other states are way, way ahead of us in working to understand the complex interactions that impact educational outcomes, but so far as I know, Connecticut is making no such effort. But that is hardly surprising: in general, Connecticut prefers not to know and has some of the worst administrative data in the nation.
I agree with you on the head-in-the-sand mentality, but I also believe that there is a deliberate avoidance of conducting and publishing a study that might assign accountability to the parents of the poor-performing kids. It’s much easier to agree with their victimhood, throw billions of dollars at the problem, and claim victory.
I actually think it’s a very simple study, probably fewer than 10 questions, and it will point to the breakdown of the family as the highest single predictor of lack of achievement. Daniel Patrick Moynihan predicted it 50+ years ago. And he was unfortunately right.
Thank you for raising this issue Ms. Moyer-Washington. The excessive testing can’t be good for students or teachers.
While Ms. Moyer-Washington brings up some good points, the use of data to drill into students’ individual needs is well documented. Testing without analysis is a waste of time. Certainly, some of the start of year data can be drawn from the students’ performance the prior year.
I am certain that in a large district such as Hartford, consulting with all the teachers would be a logistical nightmare. However, the lead teachers, academic specialists, and school and district administrators are all teachers at different career stages. Their teaching experience in planning academic programs, which includes assessments, should not be discounted.
Yes, “data” is critical, but what data is being collected, and what is being tracked over time? I suspect it is often not evaluated in any comprehensive way, not clearly linked to the design of the curriculum, and not linked to other critical variables, e.g. the level of disciplinary issues in the school, family characteristics, and truancy/attendance. A classroom with shifting attendance is inherently disruptive; one with many students is inherently less effective than a smaller class; one with frequent disciplinary issues is less effective. Unless you track these attributes, you don’t know very much. See my comment above: we were once on track to record such critical data, but preferred flying blind, one of our steady habits it seems.
The belief in the effectiveness of smaller class sizes may be exaggerated:
“Now, a new October 2018 review of class-size research around the world finds at most small benefits to small classes when it comes to reading. In math, it found no benefits at all.”
https://www.usnews.com/news/education-news/articles/2018-11-05/review-of-research-finds-small-benefits-to-small-classes
I agree about the significance of the other factors you identify.
This looks like a serious misallocation of resources, of teacher time and student attention. More critically, over time, does it generate useful outcomes? I suspect we don’t know, because Connecticut, so far as I know, does not track student/classroom/school performance over time. More than a decade ago, CCEA constructed a retrospective longitudinal study that tracked 170,000 CT high school students from high school through college (if they went) and into the workforce. But what is really important is that the data architecture would have permitted tracking ALL students K-16 and into the workforce, using a vast array of data, from test scores to curricular design to school characteristics. It would have given us a critical tool to see what works and what doesn’t. But continuation was killed off: CT has a long history of flying blind, and apparently we remain deeply committed to not knowing what we are doing and what works.
The information you identify would be very useful in measuring outcomes. CT should maintain that data.
But the study may not affect future education planning much. The problem is, approaches change. The conclusions from the study can easily be met with the objection that schools are operating differently now, so bad results prove only the failure of the past. And good results would only indicate that at least some changes weren’t necessary. That would be unsatisfying to proponents.
The reason for conducting the study is to obtain facts. If those facts are likely to be inconvenient, then support for the effort is likely to be insufficient.
Is Hartford comparing the June tests with the August tests? Students from low-income households often lose a large share of what they learned over the summer; a study in Baltimore showed those students in aggregate learned much more during the school year than students from high-income households, but systematically lost over the summer a large share of what they had gained each academic year. While I agree that we devote far too much time to testing, I also wonder what we do with the data. Do we learn anything from it? Do we redesign the curriculum to see what generates better outcomes? It isn’t clear there is any useful feedback.
If fewer students failed these tests, there would probably be fewer tests. When the problems have been identified, students can be drilled until results improve. Both parts of this plan reduce the time available for more enriching activities, but if schools’ first job is to inculcate a defined set of facts and procedures, then testing and remedying may be required.
There may be too many tests. But there may be just the number necessary to measure students’ current learning across a range of topics. More information would be needed to make that determination. Given that districts surely don’t enjoy the time and effort testing takes, the argument likely has two sides.