But now, a little-known office in the Education Department is starting to get some real data, using a method that has transformed medicine: the randomized clinical trial, in which groups of subjects are randomly assigned to get either an experimental therapy, the standard therapy, a placebo or nothing.

The findings could be transformative, researchers say. For example, one conclusion from the new research is that the choice of instructional materials — textbooks, curriculum guides, homework, quizzes — can affect achievement as profoundly as teachers themselves; a poor choice of materials is at least as bad as a terrible teacher, and a good choice can help offset a bad teacher’s deficiencies.

So far, the office — the Institute of Education Sciences — has supported 175 randomized studies. Some have already concluded; among the findings are that one popular math textbook was demonstrably superior to three competitors, and that a highly touted computer-aided math-instruction program had no effect on how much students learned.

Other studies are under way. Cognitive psychology researchers, for instance, are assessing an experimental math curriculum in Tampa, Fla.

The institute gives schools the data they need to start using methods that can improve learning. It has a What Works Clearinghouse — something like a mini Food and Drug Administration, but without enforcement power — that rates evidence behind various programs and textbooks, using the same sort of criteria researchers use to assess effectiveness of medical treatments. Without well-designed trials, such assessments are largely guesswork.

“It’s as if the medical profession worried about the administration of hospitals and patient insurance but paid no attention to the treatments that doctors gave their patients,” the institute’s first director, Grover J. Whitehurst, now of the Brookings Institution, wrote in 2012.
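The random-assignment step at the heart of such a trial can be sketched in a few lines of Python. This is a minimal illustration of balanced randomization, not the procedure of any study described in this article; the function and group names are hypothetical.

```python
import random

def assign_arms(subjects, arms, seed=None):
    """Randomly assign each subject to one trial arm.

    Shuffles the subjects, then deals them out round-robin
    so the arms stay equal in size (balanced randomization).
    """
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    assignment = {arm: [] for arm in arms}
    for i, subject in enumerate(shuffled):
        assignment[arms[i % len(arms)]].append(subject)
    return assignment

# Hypothetical example: 12 schools split across the four kinds
# of arms mentioned above — each arm gets 12 / 4 = 3 schools.
schools = [f"school_{n}" for n in range(12)]
arms = ["experimental", "standard", "placebo", "control"]
groups = assign_arms(schools, arms, seed=42)
```

Because assignment is random rather than chosen, any systematic difference in outcomes between the arms can be attributed to the program under study rather than to which schools opted in.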
But the “what works” approach has another hurdle to clear: Most educators, including principals, superintendents and curriculum supervisors, do not know the data exist, much less what they mean. A survey by the Office of Management and Budget found that just 42 percent of school districts had heard of the clearinghouse.

And there is no equivalent of an F.D.A. to approve programs for marketing, or health insurance companies to refuse to pay for treatments that do not work.

Nor is it clear that data from rigorous studies will translate into the real world. There can be many obstacles, says Anthony Kelly, a professor of educational psychology at George Mason University. Teachers may not follow the program, for example.

“By all means, yes, we should do it,” he said. “But the issue is not to think that one method can answer all questions about education.”

In this regard, other countries are no further along than the United States, researchers say. They report that only Britain has begun to do the sort of randomized trials that are going on here, with the assistance of American researchers.

As Peter Tymms, the director of the International Performance Indicators in Primary Schools center at Durham University in England, wrote in an e-mail: “The wake-up call was a national realization, less than a decade ago,” that all the money spent on education reform “had almost no impact on basic skills.” Suddenly, scholars who had long argued for randomized trials began to be heard.

In the United States, the effort to put some rigor into education research began in 2002, when the Institute of Education Sciences was created and Dr. Whitehurst was appointed the director.

“I found on arriving that the status of education research was poor,” Dr. Whitehurst said. “It was more humanistic and qualitative than crunching numbers and evaluating the impact.
“You could pick up an education journal,” he went on, “and read pieces that reflected on the human condition and that involved interpretations by the authors on what was going on in schools. It was more like the work a historian might do than what a social scientist might do.”

At the time, the Education Department had sponsored only a few randomized trials. One was a study of Upward Bound, a program that was thought to improve achievement among poor children. The study found it had no effect.

So Dr. Whitehurst brought in new people who had been trained in more rigorous fields, and invested in doctoral training programs to nurture a new generation of more scientific education researchers. He faced heated opposition from some people in schools of education, he said, but he prevailed.

The studies are far from easy to do. “It is an order of magnitude more complicated to do clinical trials in education than in medicine,” said F. Joseph Merlino, president of the 21st Century Partnership for STEM Education, an independent nonprofit organization. “In education, a lot of what is effective depends on your goal and how you measure it.”

Then there is the problem of getting schools to agree to be randomly assigned to use an experimental program or not. “There is an art to doing it,” Mr. Merlino said. “We don’t usually go and say, ‘Do you want to be part of an experiment?’ We say, ‘This is an important study; we have things to offer you.’ ”

As the Education Department’s efforts got going over the past decade, a pattern became clear, said Robert Boruch, a professor of education and statistics at the University of Pennsylvania. Most programs that had been sold as effective had no good evidence behind them. And when rigorous studies were done, as many as 90 percent of programs that seemed promising in small, unscientific studies had no effect on achievement or actually made achievement scores worse.
For example, Michael Garet, the vice president of the American Institutes for Research, a behavioral and social science research group, led a study that instructed seventh-grade math teachers in a summer institute, helping them understand the math they teach — like why, when dividing fractions, you invert and multiply. The teachers’ knowledge of math improved, but student achievement did not.
This article has been revised to reflect the following correction:
Correction: September 9, 2013
An article on Tuesday about using randomized clinical trials to study what works and what doesn’t in teaching science and math misstated the number of randomized trials that had been sponsored by the Education Department at the time the Institute of Education Sciences, an office within the department, was created in 2002. There had been a few randomized trials, not “exactly one.”