Does Online Instruction Work?
(Part 3)
Larry Cuban is a former high school teacher, superintendent and university professor. He writes on classroom teaching, the history of school reform, the use of technology in K-12 and how policy is translated into practice. In part three of his blog series on Online Instruction, Larry looks at the research used to justify the implementation of online instruction. His blog post is re-posted here with permission.
The first two parts of this series can be found here: Part 1, Part 2.
Here is the fundamental question that public policymakers (e.g., federal and state officials, local school board members, and superintendents) have to answer when making decisions that involve children and youth compelled to attend public school: does online instruction work? By "work" I mean raising students' academic achievement and producing other desirable outcomes such as increased attendance, higher graduation and lower dropout rates, and college admissions. Answering that question, however, gives educational leaders heartburn.
Why heartburn? Because of the tortuous role that research plays in policymaker decisions about adopting and implementing technologies in schools, especially the current clamor for online instruction. Over the past few decades, there have been thousands of K-12 studies that have sought an answer to the question.
The answers provided by scores of studies have been contested because most have had serious design and methodological flaws. Moreover, many of these studies lumped together full-time virtual schools, hybrids, and online courses, and the results have been underwhelming. That is where heartburn enters the picture.[i]
Even when researchers over the past few decades have performed meta-analyses of the smaller number of studies that met higher standards of quality, they found that virtual instruction in its various modes is, at best, equivalent to regular face-to-face classroom instruction. At worst, some studies showed smaller achievement gains than traditional teaching. And keep in mind that these meta-analyses were of studies where online instruction occurred mostly in math, reading, and science courses, not in other academic subjects. The overall picture is considerably less than promoters of full- and part-time virtual schooling have promised or leaders had expected.[ii]
What complicates matters is that findings drawn from research studies on the effectiveness of online instruction are only one of many interlocking tiles in a mosaic that policymakers assemble in judging whether to adopt virtual instruction for children and youth. Policymakers are often torn by the push-and-pull of conflicting impulses over what kinds of evidence of effectiveness matter and how much evidence is necessary to inform, shape, and justify a policy decision.[iii]
Consider the push impulse for evidence. Using research and other forms of evidence to make a decision is rational, a value highly prized in everyday life and even more so in policymaking, where the stakes in power, influence, and resources run high. Gathering, sifting, and analyzing data to make a personal, professional, or organizational decision is what is expected of those in decision-making posts. So when it comes to public policy decisions, in the best or worst of fiscal times, policymakers have to make a strong public case anchored in ample evidence to convince voters and key stakeholders (e.g., school boards, chambers of commerce, teacher unions, state officials, etc.) to buy and deploy new technologies in classrooms and have students use them regularly. Evidence, including research studies, that these technologies will help students learn more, faster, and better is expected. Furthermore, current top-down pressure among business, civic, and educational leaders for "research-based practice" and "data-driven decisions" presses school decision-makers to have solid proof in their pockets or in snazzy PowerPoint presentations filled with studies that tout the effectiveness of online instruction for students.[iv]
Where the pull impulse enters the policy picture is that these very same policymakers have an equally strong tug to buy and deploy glittering new technologies as quickly as possible without waiting for researchers to come up with supportive findings. At professional conferences, national leaders pitch the virtues of “revolutionary” changes springing from virtual instruction and placing new technologies in classrooms. Pulled by first-hand experiences and stories of “transformations” in student learning and astute marketing by vendors, policymakers, technology leaders and school officials do not need to read research studies, visit other districts, or attend more conferences. They know in their gut the answer to the question of what works; they have faith in their intuition and, like entrepreneurs and ambitious decision-makers elsewhere, they want to forge ahead and implement virtual schooling swiftly because they believe it will help students learn.
These policymakers are not irrational. There is a political logic in mandating online courses for every student as a graduation requirement, starting pilot tablet and laptop programs, and encouraging a principal and a cadre of teachers to create a technological innovation tailored to their school. They consult with key stakeholders in the community before inviting charter management organizations like Rocketship Schools to establish blended learning programs in their schools. These decision-makers do not need researchers to tell them that these new technologies "work." They believe in their hearts that they will work. Push-and-pull conflicting urges pit solid research studies against strong beliefs and leave unanswered the question of what kinds of evidence matter. Too often beliefs trump facts.[v]
In asking the fundamental policy question, too often left unanswered, of whether online instruction is effective in teaching and learning, it helps to examine other reforms that have "worked," reforms where responsible decision-makers did not rely on stories, beliefs, and intuitive snap judgments but, instead, were guided by solid research evidence.
Take preschool education. Well-designed study after study of three- and four-year-olds in preschool programs (e.g., the Perry Preschool, Abecedarian) followed their progress through school and into adulthood. These studies show short- and long-term gains in academic achievement, adult behaviors, and post-graduation earnings. Or consider the research and evaluations of career-technical academies, where students get prepared for both college and career. Researchers doing experimental and quasi-experimental studies have found, over the past four decades, a range of positive student outcomes for graduates of these programs.[vii]
Considering these examples where research studies in K-12 schooling clearly show that certain investments in programs and practices work, why do policymakers who buy, deploy, and mandate different forms of virtual schooling so seldom cite studies to determine whether online instruction is effective? Why is the return on investment of taxpayer monies so often ignored?
Three reasons come to mind.
Research results are scant and mixed. As stated above, the results of meta-analyses of K-12 studies do not show a decided edge for students taking online courses or enrolled in full-time virtual schools; such students do not perform even marginally better than students in teacher-led classrooms. More striking, however, is that only a few studies of virtual instruction in K-12 schools meet the minimum quality threshold for design, sampling, and methodology. From a recent (and often cited) meta-analysis of studies, the researchers stated:
Few rigorous research studies of the effectiveness of online learning for K–12 students have been published. (original italics).
A systematic search of the research literature from 1994 through 2006 found no experimental or controlled quasi-experimental studies comparing the learning effects of online versus face-to-face instruction for K–12 students that provide sufficient data to compute an effect size. A subsequent search that expanded the time frame through July 2008 identified just five published studies meeting meta-analysis criteria.[viii]
The authors of the meta-analysis conclude that this set of five studies:
comprises a very small number of studies, especially considering the extent to which secondary schools are using online courses and the rapid growth of online instruction in K–12 education as a whole. Educators making decisions about online learning need rigorous research examining the effectiveness of online learning for different types of students and subject matter as well as studies of the relative effectiveness of different online learning practices.[ix]
In short, given the results of the few K-12 studies that have been done, there is clearly insufficient evidence to launch major online initiatives in either elementary or secondary schools. For those policymakers who seek to appear rational and prize research findings, the pantry is nearly empty.
The research being done is shoddy. While only a few analyses of online instruction approach the gold standard of experimental or quasi-experimental studies, a great deal of research has been (and continues to be) done. Unfortunately, much of it is of poor quality. Most studies fall far below the minimum standards researchers have established for determining the effectiveness of an educational program or procedure. Bias is evident in the sampling of students and teachers included in studies. Bias also appears in studies funded by technology vendors. Moreover, there is far too much reliance on teacher and principal surveys and on self-reports of student engagement and achievement. Finally, among the studies that claim higher test scores as a result of online instruction, few control for obvious factors that could otherwise explain the rise in test scores.[x]
Slipshod research, of course, has seldom stopped champions of online instruction from pressing policymakers to include such studies in their recommendations and from using such research to persuade practitioners of the merits of virtual schooling for children and youth. Thus, poorly designed studies loaded with lethal flaws that show student gains in test scores often make media headlines for millions of readers and viewers, while the occasional well-designed study that shows modest or no gains turns up in academic journals read by a few hundred researchers.
Nonetheless, policymakers have decided again and again to have more and more elementary and secondary school students in blended schools and taking online courses to solve one or more of the problems described in Part 2. Were sensible observers of the contemporary policy scene to watch top-level district, state, and federal leaders push ambitious virtual programs, they would note the frequent absence of convincing evidence for the sharp expansion of online instruction. They would easily conclude that decision-makers made policy on grounds other than research findings. And they would hardly miss that when policymakers did cite research studies in making these decisions, the citations were selective and, more often than not, justified a policy already decided upon. Why is that? Here is where I offer a third reason for the minor role that research plays in making decisions about virtual learning.
Symbolic, political, and budgetary reasons carry far more weight in making policy decisions about online instruction than research findings.
State and local school boards and superintendents adopt elements of virtual schooling because they want to be seen as technologically innovative and ahead of other districts. In this culture, technology is equated with social and economic progress. Even the term "high tech" (like high fashion, high church, high class, high society) conveys a whiff of superiority relative to "low tech" methods and materials. Symbolically, high-tech is high status. Having students use new technologies signals that schools are modern, up-to-date, and preparing the next generation to enter higher education or go directly into the labor market with sufficient skills and knowledge to find jobs. Being in the vanguard of innovation (say, buying iPads for every kindergartner) signals to voters, taxpayers, and parents that the district wants to raise achievement by engaging students while bringing the real world into classrooms to prepare children and youth for an information-driven economy. Not adopting new technologies, even when funds are short, sends a clear message that district leaders are failing their students, mindlessly reinforcing traditional instruction, and neglecting grave educational problems.[xi]
For policymakers, being seen as ahead of the game in technology garners public support. Too often school critics forget that local boards of education are completely dependent upon voters for funding. School boards that rely upon local and state resources to raise funds for schools are politically smart, then, to buy computers and whiteboards and to expand online instruction as high-status symbols that cement community support for future tax levies and bond referenda. They are also politically smart in spending monies to adopt and implement virtual schooling because, in the long term (about a decade), it will reduce significantly the cost of schooling children and youth.
Finally, policymakers also know that business, civic, and community leaders expect of them unceasing efforts to improve students' academic performance through better school organization, governance, curriculum, and instruction, including the adoption of technology. Since World War II, job number one has been reform. Unrelenting reform is, in short, a policymaker's strategy for political survival.[xii]
For these three reasons, policymakers' use of research studies on the effectiveness of online instruction matters little. The truth is that even were there more than a handful of rigorously designed studies showing strong student effects from taking online courses, such results would be used to justify, after the fact, policy decisions already made. Of course, such solid studies are missing from the research pantry. The fact remains that no one knows for sure for which students virtual schooling works, in what subjects, and under what conditions.
If that is the case now, it does not mean it will be so forever. Recall that I pointed out how rigorous research designs, sampling, and methodologies have produced findings over time that have accumulated into convincing caches of evidence (e.g., preschool and career-technical academies) sufficient to give policymakers a rock-solid foundation for making decisions if the political conditions and resources were favorable for such policies.
Sure, that final "if" clause is crucial, but it is realistic in light of the history and practice of making and implementing school policy over the past half-century. Political, economic, and social conditions influence which reforms get identified and adopted. With the current excitement over virtual learning and blended schools unlikely to abate in the immediate future, and with interest in spending ever larger amounts of money on online instruction, decision-makers should be asked what evidence supports the expansion of online instruction; at the least, that question demands answers that can be reviewed and analyzed publicly.
[i] Gene Glass, “The Realities of K-12 Virtual Education,” p. 5.
[ii] Cathy Cavanaugh et al., "The Effects of Distance Education on K–12 Student Outcomes: A Meta-Analysis" (Naperville, IL: Learning Point Associates, 2004); Rosina Smith et al., "A Synthesis of New Research on K-12 Online Learning" (Naperville, IL: Learning Point Associates, 2005); Barbara Means et al., "Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies" (U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, 2010).
[iii] See, for example, Charles Lindblom and David Cohen, Usable Knowledge: Social Science and Social Problem Solving (New Haven, CT: Yale University Press, 1979); Carol Weiss et al., "The Fairy Godmother and Her Warts: Making the Dream of Evidence-Based Policy Come True," American Journal of Evaluation, 2008, 29(1), at: http://aje.sagepub.com/cgi/content/abstract/29/1/29
[iv] For a typical example of calls for practitioners to use data-driven decision making, see Pamela Shorr, "10 Things You Always Wanted To Know about Data-Driven Decision Making," Scholastic Administr@tor, September 2003, at: http://www.scholastic.com/browse/article.jsp?id=423
[v] Michelle Davis, "States, Districts Move to Require Virtual Classes," Education Week, October 9, 2011; Kelsey Sheehy, "States, Districts Require Online Ed for High School Graduation," U.S. News, October 24, 2012, at: http://www.usnews.com/education/blogs/high-school-notes/2012/10/24/states-districts-require-online-ed-for-high-school-graduation
[vi] Districts sometimes give up laptops. See Winnie Hu, "Seeing No Progress, Some Schools Drop Laptops," New York Times, May 4, 2007; Jonathan Schorr and Deborah McGriff, "Future Schools," Education Next, 2011, at: http://educationnext.org/future-schools/
[vii] James Heckman, "Skill Formation and the Economics of Investing in Disadvantaged Children," Science, Vol. 312, June 30, 2006, pp. 1900-1902; James Kemple, "Career Academies: Impact on Work and Educational Attainment" (MDRC, March 2004).
[viii] Barbara Means et al., "Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies" (U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, 2010), p. xiv.
[x] Yong Zhao et al., "What Makes the Difference? A Practical Analysis of Research on the Effectiveness of Distance Education," Teachers College Record, 2005, 107(8), pp. 1836-1884; Means et al., "Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies." For an example of vendor bias in reporting research, see the Intel white paper, Education Transformation, "The Positive Impact of eLearning—2012 Update." A capsule summary that contains both the high and the low of research on online instruction, as well as of studies of educational technology, is on p. 8: "While few rigorous experimental or controlled quasi-experimental studies on eLearning's benefits have been published, a critical mass of evidence indicates that investments in eLearning can deliver substantial positive effects."
[xi] In the discussion of the symbolism of technology in K-12 schools and in the larger culture, I draw from Kathryn Henderson, On Line and On Paper (Cambridge, MA: MIT Press, 1998), chapter 8; John Meyer and Brian Rowan, "Institutionalized Organizations: Formal Structure as Myth and Ceremony," in Walter Powell and Paul DiMaggio (Eds.), The New Institutionalism in Organizational Analysis (Chicago: University of Chicago Press, 1991), pp. 41-62; Jeffrey Pfeffer, "Management as Symbolic Action: The Creation and Maintenance of Organizational Paradigms," Research in Organizational Behavior, 1981, 3, pp. 1-52.
[xii] Frederick Hess, Spinning Wheels: The Politics of Urban School Reform (Washington, D.C.: Brookings Institution Press, 1998).