It’s entirely my fault. I chose to subscribe to a daily email from Times Higher Education. No-one forced me. I could cancel at any time. It’s also my fault that I get snarky when they do one of these:

[Screenshot of the item from the THE email bulletin]
That’s from today’s THE email bulletin. The story to which it refers is behind a paywall, but my university subscription gives me access. If you don’t have access to the THE and your scepticism setting has been dialled back, you may find your brain registering the belief that
“teaching students online produces the same academic performance as face-to-face teaching”.
After all, that’s what you just read in the THE bulletin. If you are an inexperienced university manager, an over-stretched educational policymaker, a researcher for a free-market think tank or an EdTech entrepreneur, such a belief could prove unhelpful to the rest of us.
Turning to the THE story itself, we are told:
“The results, which come at a time when universities across the world have had to shut their campuses and move to online instruction as a result of the coronavirus pandemic, found that students who were taught wholly online scored the highest on their average scores in assessments taken throughout the course. Those taught fully online scored, on average, 7.2 percentage points higher than the other two forms of learning.”
But if we look at the journal article on which this is based, we find that the researchers are quite clear about why the ‘fully online’ students did better. Unlike the students with whom they were being compared, the fully online students had three goes at each of the weekly assessment tasks. The students taught in ‘in-person’ or ‘blended’ modes did not have this advantage. The researchers describe this as “an artefact of the more lenient assessment submission policy for online students” (Chirikov et al., 2020, p. 2). In the journal article, this caveat comes immediately after the finding quoted in the THE excerpt above. Yet it made it into neither the THE story nor the THE’s email bulletin.
Casual readers of the email bulletin might also assume that the proposition about equivalent performance comes from a representative sample of comparisons between online and face-to-face teaching. They might be surprised to learn that the research was conducted in three Russian universities, in two courses: Engineering Mechanics and Construction Materials Technology. The online course instructors came from one of Russia’s “top engineering schools”, while the instructors involved in the ‘in-person’ and ‘blended’ modes worked in one of the students’ own universities. These are described in the journal article as ‘resource-constrained institutions’, and the researchers characterise the instructors in the ‘resource-constrained’ universities as having weaker educational backgrounds, fewer research publications and less teaching experience than the instructors from the ‘top engineering school’ (Chirikov et al., 2020, p. 2).
The THE article does not refer to any prior studies. The extensive review and meta-analysis work by Barbara Means and colleagues (Means et al., 2013, 2014) may as well not exist.
My point is not to criticise the researchers (Igor Chirikov and colleagues): they put a lot of care into this study, and they are serious about improving access to STEM education opportunities. It is to wonder how one of our ‘top trade papers’ (the THE) can provide such bad service to its industry, and what we can do to help improve the situation.
References
Chirikov, I., Semenova, T., Maloshonok, N., Bettinger, E., & Kizilcec, R. F. (2020). Online education platforms scale college STEM instruction with equivalent learning outcomes at lower cost. Science Advances, 6(15), eaay5324. doi:10.1126/sciadv.aay5324
Means, B., Bakia, M., & Murphy, R. (2014). Learning online: what research tells us about whether, when and how. New York: Routledge.
Means, B., Toyama, Y., Murphy, R. F., & Baki, M. (2013). The effectiveness of online and blended learning: a meta-analysis of the empirical literature. Teachers College Record, 115, 1–47.