Educational Research: Be Certain It Meets the Quality Standard

One day, a teacher with a comedic streak challenged his colleagues to adopt the latest innovation in education. During his conference period, while in the teachers’ lounge, he claimed that some Ivy League schools in the Northeast had discovered that students’ intelligence quotients (IQ) grew exponentially when teachers wore funny hats.

A couple of the trend-setting teachers decided to try it with their problem classes. Lo and behold, those classes seemed better behaved.

Through word of mouth, a few more teachers bought into the new system. Reports of improved test scores resounded throughout the building. Teachers wore pointed hats, hats in shocking colors, and hats with plumes sticking out the side. They tested which hat produced the best results. Next, the principal shared their success stories with other principals. Entire districts adopted the new system. Colleges began to embrace it and produce “How To” manuals. After two years, it became evident that the method did not produce the results it promised. So, one by one, schools abandoned the “Funny Hat” method.

Educational Research and the Lunacy of Some School Reform

Such lunacy, although somewhat exaggerated, is all too reflective of the way school reform at times actually occurs. Too often, changes are made in schools based on spurious research, transitory test scores, the publish-or-perish pressure placed upon college professors, or the many millions of dollars to be made by creating a popular teaching method. Meanwhile, children are subjected to these countless buffooneries.

Myron Lieberman, in his profound work, Public Education: An Autopsy, rightly postulated, “Like individuals, social institutions die, and their death forces us to face an uncertain future… we cannot always wait until rigor mortis sets in to consider what should be done to meet the new situation.” The present stench of failing schools is the pungent odor of a rotting corpse that died from a thousand cuts of slovenly mishandled research.

Educational Research and the Infernal Cycling of Fads

One of the maladies of education is the infernal cycling of fads that leaves seasoned veterans with a bitter taste toward research, rendering quality school improvement nearly impossible. The latest and greatest method that promises utopian results in learning often delivers little more than a placebo effect. After a few of these iterations, jaundiced faculty view the expert who touts the latest be-all-end-all method as little more than a snake-oil salesman.

This, in turn, provokes administrators, who are pressured by the high-stakes testing environment, to view faculty as entrenched and unwilling to change. In response, school leaders hurriedly heap up research on how to change corporate culture to break the logjam of resistance. They mobilize, organize, and implement the latest research to move the school forward.

Whether the cause of failure was the faulty research itself or its unsound application, failure begets failure. When another attempt at implementing research fails, dulled faculty become even more passive-aggressive toward administrative pressure. As a result, these research “fidgets” often end with minimal improvement after countless hours, energy, and precious funds have been spent.

More pressure is piled on, as school administrators have to then account for the lackadaisical results to school boards. Frantic school leaders start asking themselves, “What’s wrong with us?” Investigative committees are formed. Teachers too begin reflecting, “Why doesn’t this work in our district? What is wrong with us?”

The downward spiral continues as each stakeholder group identifies a parade of scapegoats. It didn’t work in our school because we have too many students who are low socio-economic status (SES). We do not have quality teachers. Our parents are unsupportive. Our administrators are incompetent. On and on go the victims’ lists of scapegoats.

When the self-flagellations finally end, staggering school leaders and teachers often point to faults within the local school system. Yet their attempts to assuage guilt may prove both unproductive and unnecessary, when at the heart lie faulty research practices throughout the education landscape.

The Heart of the Malady: Faulty Educational Research Practices

Education practitioners are far too ready to believe whatever marches under the banner of “research.” We are too ready to swallow whatever pill is handed to us, so long as the person offering it has the right pedigree or works at some prestigious research facility or university.

The pressure to publish at universities and research facilities is intense. Professors must publish to be considered for tenure or risk losing their candidacy. Researchers must concoct results in order to land that large federal grant. Keynote speakers must find, understand, and celebrate research for the next gig, or the opportunities will dry up. Administrators must grab on to the latest method that has worked elsewhere to show the board that they are proactively addressing problems.

As such, there is a systemic culture that fuels ongoing quick, shoddy research practices in the field of education. The field is rife with published gyrations of invalid or unreliable research.

Educators have lost the habit of bringing a critical eye to research and researchers. Beyond this, they have carelessly accepted pitches from product salespersons who masquerade as educators. All the while, these hucksters cite only the research that supposedly validates whatever they’re selling.

Recently I was given an article citing various sources of research from an organization that touts effective instruction for the learning difference (LD) child. Lo and behold, the article claimed the research was clear that students behave better with individualized behavior plans, an affirming tone, and positive rewards.

Although the research and the article may or may not be true, I would have been much more impressed with the research had it come from Americans to Restore Corporal Punishment. Better yet, if the research had shown that the LD child can be taught with the same modalities as any other child. It is far too convenient to be selective in what one deems “research” in order to confirm one’s own prejudices and biases.

Educational Research: Asking the Critical Question

Prior to working for over thirty years in the education field, I made a living in construction. I will never forget my early carpentry experience. Even the most novice carpenter quickly learns the value of “measure twice, cut once.” All it takes is ruining hundreds of dollars of material by miscalculating a length.

At San Marcos Academy, we strive to take a “measure twice, cut once” approach to research. Even when the research itself proves to be both valid and reliable, we continue to ask the hard questions: “Is it quality research? Were the results in fact valid and reliable? Is there a direct correlation between those results and our students?” Once those have been answered, we start with pilot programs to test the method in the field before we spread it campus-wide. We include the appropriate stakeholders in the learning process. Once we have kicked all of the tires and are satisfied that it would be good for all of our students, we begin expanding the implementation.

Because our school is filled with seasoned educators who have been made to run on research pinwheels like hamsters, we do not genuflect to everything claiming to be research. We don’t jump on every bandwagon that toots its horn for the latest fad in educational research. Instead, we are intentional, deliberative, and thorough. When we expand a pilot to the campus at large, it carries a high success rate. As a result, teachers and staff work together to make it a successful part of our students’ learning.

–By Bob Bryant, Academic Dean, San Marcos Academy
