A new report called “Making Digital Learning Work” (authored by Arizona State University and the Boston Consulting Group) summarizes case studies on “digital learning” efforts taking place at Arizona State University, Georgia State University, Houston Community College, the Kentucky Community and Technical College System, Rio Salado Community College, and the University of Central Florida.
Higher Ed futurist and commentator Bryan Alexander had a good take on the report. If you don’t have time to read the whole report (or even just the executive summary), Bryan’s piece is more informative than what follows from me, and I won’t duplicate his expert read, with which I largely agree.
I have two responses of my own:
The meanings of key words and concepts in the report shift around in ways I find pretty questionable.
There’s a lot of good and potentially useful information here, but unfortunately, much of it is obscured by the inconsistent use of various terms and names. The same term can variously describe a student, a software product, an academic program, an individual course, or a major institutional initiative. Now, part of this confused verbiage comes from the report being composed of case studies of six different schools, most of which have multiple initiatives going on at once.
Unfortunately, I’m also concerned that this confusion is being fostered deliberately in order to make positive findings about specific applications or contexts seem more all-encompassing or broader in impact.
For example: in almost every part of the report, the term mixed-modality refers to an individual course that offers a mix of online and face-to-face elements. Then, midway through the report, under the heading “Mixed-modality learners experienced improved retention and graduation rates,” the authors discuss studies showing that students who take a mix of fully online and in-person classes sometimes do better than students taking only in-person classes. The report claims that this finding “further reinforces the value of mixed-modality models.” In this subtle drift of the term mixed-modality from “course” to “learner,” the positive messaging around one specific usage is rhetorically stretched to cover all usages.
Conversely, I noted instances where positive results from large-scale or holistic initiatives are used to bolster more limited (but similarly named) applications. I noticed this particularly around the term “adaptive”: the successes of large-scale course and curricular redesigns (referred to as “adaptive courses”) that incorporated adaptive courseware alongside major learning-space and enrollment changes were used almost-but-not-quite interchangeably with discussions of the courseware itself.
The report views the harvesting and exploitation of student data as a near-universal good.
There’s obviously much that can be gained by exploring the data that students produce, but in a post-Cambridge Analytica world, we can’t pretend that this is a morally neutral act anymore.
I’m moved to contrast this report with a recent Chronicle of Higher Ed piece by Chris Gilliard, a professor of English at Macomb Community College: How Ed Tech Is Exploiting Students. I found Gilliard’s whole piece very convincing, and the final lines particularly powerful: “When we draft students into education technologies and enlist their labor without their consent or even their ability to choose, we enact a pedagogy of extraction and exploitation. It’s time to stop.”