by Jane Robbins, a senior fellow at American Principles Project, an attorney, and co-author of Deconstructing the Administrative State: The Fight for Liberty

Over the last several years, testing has been a major point of contention between parents and the education establishment (both federal and state). Especially as states have responded to federal mandates by administering unvalidated assessments aligned to the Common Core national standards, parents across the country have begun, with varying degrees of success, to opt their children out of those assessments. The Every Student Succeeds Act perpetuates the federal testing mandates, so the opt-out movement will continue.

But the education establishment is now colluding with Big Data to obliterate opting out. How? By promoting “embedded assessment” within the digital-learning platforms that are gradually replacing teacher-led instruction. As students interact with these sophisticated platforms, the software collects millions of data points on each child and can assess exactly what “skills” he has mastered and where he needs further training. (Modern progressive education is about skills rather than knowledge, and training rather than education.) Embedded assessment means each student’s performance will be assessed every moment, in real time, through analysis of keystrokes and perhaps even physiological reactions. Ultimately the periodic “summative assessment” (the end-of-course or end-of-year test) will disappear, and with it parents’ ability to protect their children from the testing.

The concept of embedded assessment has a certain appeal. If students are being assessed continually, then the adaptive software can adjust to feed them whatever they need to address any problems they’re experiencing. But even some players in Big Data acknowledge serious concerns with the concept, and parents and policymakers must understand what embedded assessment really means for children.
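To see how such a system works in principle, consider a deliberately simplified sketch. The names, thresholds, and scoring rule below are invented for illustration; no vendor’s actual product is being described. The point is only the mechanism: every response is logged, an estimate of “mastery” is updated, and the software decides what the student sees next.

```python
# Illustrative sketch only: invented names and thresholds, not any vendor's actual system.
from dataclasses import dataclass, field

@dataclass
class SkillEstimate:
    correct: int = 0
    attempts: int = 0

    @property
    def mastery(self) -> float:
        # Simple running proportion; real platforms use far more elaborate models.
        return self.correct / self.attempts if self.attempts else 0.0

@dataclass
class EmbeddedAssessment:
    skills: dict = field(default_factory=dict)
    log: list = field(default_factory=list)  # every interaction is retained

    def record(self, skill: str, is_correct: bool, keystrokes: int, seconds: float):
        est = self.skills.setdefault(skill, SkillEstimate())
        est.attempts += 1
        est.correct += int(is_correct)
        # Nothing is discarded: the running event log *is* the assessment.
        self.log.append((skill, is_correct, keystrokes, seconds))

    def next_activity(self, skill: str) -> str:
        m = self.skills.get(skill, SkillEstimate()).mastery
        if m < 0.5:
            return f"remedial drill on {skill}"
        if m < 0.8:
            return f"practice problems on {skill}"
        return f"advance past {skill}"

# Usage: every click becomes a data point, and the software decides what comes next.
platform = EmbeddedAssessment()
platform.record("linear_equations", is_correct=False, keystrokes=42, seconds=95.0)
platform.record("linear_equations", is_correct=True, keystrokes=30, seconds=60.0)
print(platform.next_activity("linear_equations"))  # "practice problems on linear_equations"
```

Note that the summative test never appears anywhere in this loop; the assessment is simply the accumulated log itself, which is why there is nothing discrete for a parent to opt out of.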

In a recent presentation at Princeton’s Center for Information Technology Policy, Yale University researcher and legal scholar Elana Zeide discussed the troubling implications of, as she put it, “moving from human decision-making to machine decision-making” in education. The potential problems involve threats to both student privacy and individual freedom and autonomy.

Zeide explained that adopting “personalized learning” through technology will enable creation of student portfolios at a granular level. For example, the software will record not only whether the student can calculate the correct answer to an algebra problem, but exactly how his brain works through each step of that problem. As he progresses through school, platforms such as the creepy, mind-mapping Knewton will create “knowledge maps” to show precisely what the student knows and can do, based on every keystroke he executes and (with some programs) even on his heart rate and facial expression as he does so. Zeide predicts those knowledge maps will eventually replace degrees and diplomas as credentials for higher education and employment.
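Stripped of the marketing language, a “knowledge map” is just a data structure keyed by skill and fed by interaction events. The sketch below is hypothetical, with invented field names, but it shows how biometric readings could sit alongside each answer, which is exactly what makes such a record far richer, and far more intrusive, than a transcript.

```python
# Hypothetical event schema; field names are invented for illustration.
from datetime import datetime, timezone

knowledge_map = {}  # skill -> list of interaction events, accumulated over years

def log_event(student_id, skill, correct, keystroke_timings, heart_rate=None, expression=None):
    event = {
        "student": student_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "correct": correct,
        "keystroke_timings_ms": keystroke_timings,   # how the answer was typed, not just what
        "heart_rate_bpm": heart_rate,                # optional biometric channel
        "facial_expression": expression,             # e.g. "frustrated", if a camera is used
    }
    knowledge_map.setdefault(skill, []).append(event)

log_event("student-123", "quadratic_factoring", correct=False,
          keystroke_timings=[210, 180, 900, 95], heart_rate=88, expression="frustrated")
```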

The existence of such portfolios raises major concerns about student privacy. This data generally is not covered by the federal Family Educational Rights and Privacy Act. Zeide acknowledged that all this “portable, interoperable, instantly transferable, and durable” data constitutes an enormous temptation for companies and researchers — “you can repurpose it for all sorts of cool aggregation and mining . . . and you can discover things you never knew were there!” Do parents want corporations and others sifting through their children’s most intimate data to discover things that should remain private?

Zeide focused especially on the algorithms that the software will create using these millions of data points on each student — algorithms that will predict a student’s future behavior and performance. With digital training, all steps in the traditional educational process — observation, formative assessment (“quizzes” that measure how well the student is learning), summative assessment, and credentialing (awarding of diplomas or certificates) — are collapsed into one moment. Every keystroke, every action, no matter how tiny, will be memorialized in this algorithm, forever.
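What Zeide calls the resulting “algorithmic credential” amounts to collapsing that entire event history into a single predictive score. A toy version, with made-up weights and features, might look like the following; the only point is that every recorded event, however old or however minor, keeps contributing to the number.

```python
# Toy predictive score over a full event history; weights and features are invented.
def algorithmic_credential(events):
    """Collapse a student's entire interaction history into one number.

    Nothing is ever dropped: an event from years ago counts the same as one
    from yesterday unless the designer deliberately chooses to decay it.
    """
    if not events:
        return 0.0
    score = 0.0
    for e in events:
        score += 1.0 if e["correct"] else -0.5          # right and wrong answers
        if e.get("facial_expression") == "frustrated":
            score -= 0.1                                 # even affect readings feed the score
    return score / len(events)

events = [
    {"correct": False, "facial_expression": "frustrated"},
    {"correct": True,  "facial_expression": None},
]
print(algorithmic_credential(events))  # one number standing in for years of schooling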

Under such a system, Zeide said, every student will be subject to constant monitoring and will earn an “algorithmic credential” based on every interaction he has ever had with the educational software. That credential could dictate what kind of higher education he qualifies for and what kind of job he gets.

What will be the psychological effect when a child knows he cannot erase anything — that everything he does, every mistake he makes, will be fed into his algorithm? Because all data is, as Zeide said, “decontextualized,” the computer won’t make adjustments for days when the child is sick or struggling for some other reason (things a teacher would know and take into account).

Consider the intimidating effect of this permanent portfolio on every student. Will the student feel pressured to conform to the consensus of opinion on a particular topic, knowing that any dissent may come back to haunt him? Or what happens if the algorithm gets it wrong? If the algorithm mislabels him in some way? Will there be an appeal process? Appeal to whom? The erroneous or misleading data is already fixed and recorded. Is human agency therefore to be eliminated?

What if the algorithmic data were neither wrong nor misleading at the time it was collected and analyzed, but the individual experienced a fundamental conversion from, say, unengaged slacker to motivated go-getter? Will automated systems immediately discard his application or resume on the basis of the now-outdated algorithm?

The problems of “predictive analytics” (decision-making based on algorithms) are being explored in many contexts — credit ratings, employment decisions, law-enforcement issues. Individuals who find themselves disadvantaged by an algorithm because of mistaken or misleading information can spend years trying to escape a hall of mirrors. When education is increasingly concerned with “equity,” the possibility that individuals will be labeled based on stereotypes cannot be ignored. A stereotype perpetuated by a supposedly unbiased algorithm rather than a human being is even more difficult to overcome.

How do we prevent these problems with education algorithms?

Perhaps the law could impose parental-consent requirements. Zeide considers this idealistic, since it is unlikely parents will fully understand the nature of the problem or what they are really consenting to. Or the law could confine use of the data to “educational purposes,” but that phrase can be expanded to allow almost anything. Or could the law ban collecting biometric data? Zeide expressed concern that this would interfere with services for special-needs students (although the law could be drafted to allow narrowly tailored uses for such students), and in any event the intrusive data that would feed the contemplated algorithms goes well beyond the purely biometric. Drafting an effective law is possible, but difficult in light of opposition from the powerful educational-technology companies.

Zeide also discussed the frequently recommended possibility of giving each student control over his own portfolio, allowing him to remove it from the “silo” and convert it into a “data backpack” that he can use for his own goals. But would universities or potential employers demand to see the portfolio? Even if the law prohibited them from asking, would the individual’s decision not to volunteer it suggest to them that he’s hiding something?

Zeide raised the issue of algorithms’ effect on the nature and definition of education itself. The Big Data mindset may suggest that only what can be measured and recorded is worth knowing. If a student’s grasp of the messages of Macbeth can’t be recorded as a “skill” that should be included in the portfolio, does that mean this understanding is less important to his education? Progressive education schemes such as Common Core already minimize intangible understanding in favor of concrete skills that can be measured.

Zeide wandered into territory that borders on the heretical for data mavens. She raised the question whether, perhaps, “less is more” — whether we should limit use of these digital tools, or preserve the “silos” so that not all data on a student is linked and easily accessible. Or (gulp) maybe we want to eliminate the digital tools altogether.

This conclusion would be anathema to foundations such as ExcelinEd (on whose board Secretary of Education nominee Betsy DeVos served) and tech-industry-funded groups such as the Data Quality Campaign. These groups trumpet the supposed benefits of digital training and claim it can transform education. For now, their argument is winning.

Encouraged and incentivized by the federal government, and overrun by education-technology snake-oil salesmen, public schools are adopting digital training at a breakneck pace. Most decision-makers for those schools have probably never considered the serious implications of this transformation. Nor have the implications been explained to parents, who are instead assured only that digital training will create unparalleled personalized learning opportunities for their children.

We need to take a hard look at what the Big Data revolution means for the children in our schools — for their privacy and their humanity. As Big Data advances in education, parents will discover that opting out of a test isn’t enough. To protect their children, they may have to opt out of an entire system.