Last week I joined a presentation on AI in education. During Q&A, a student asked what I thought was the most grounded question of the session: what is actually happening in K-12 schools right now? How are they responding?
It is a question I get a lot, and I find that my answer keeps getting clearer the more I talk with schools.
Ethan Mollick’s concept of the “jagged frontier” was built to describe what AI can and cannot do, but I think it applies just as well to how schools and school systems are navigating AI. The response is uneven, messy, and genuinely interesting to watch. After working across a range of systems in the US and paying close attention to what is happening globally, I see four distinct patterns.
All-In
Some school systems and even entire countries have decided to ride the wave as a matter of national agenda. Singapore is the clearest example, treating AI integration in education as a strategic priority rather than something to manage or contain. Its EdTech Masterplan 2030 lays out a whole-nation vision of what it means when a government decides to lead rather than follow. China has taken a similar posture, although the size and complexity of its educational system make it internally jagged. South Korea went all-in early and has since pulled back in striking fashion, which is its own fascinating case study in what happens when adoption outpaces readiness. The AI textbook rollout stalled at around 30% adoption, became politically polarized, and was ultimately reclassified as optional after less than a semester. Worth watching closely.
In the US, private systems like Alpha Schools were early movers. They have received significant criticism, though I think it is worth noting that much of the criticism is about the quality and implementation of the AI they are using rather than about whether the fundamental direction is wrong. The question of whether AI belongs in K-12 classrooms is a different question from whether this particular school is doing it well.
All-Out
Other school systems have gone the opposite direction, prohibiting AI use entirely. I understand the instinct, even when I disagree with the outcome. These systems are often responding to real and legitimate concerns, and some of them will probably shift as the pressure to adapt grows.
Tip-Toe
This is where I find most of the school systems I work with. The pattern is fairly consistent: start by giving administrators access, then move carefully toward teachers, and only then begin the much harder conversation about where and how students fit in. It is cautious, sometimes frustratingly slow, but it is also a recognizable form of institutional risk management, and a response to the fact that we actually do not know much yet. We have had 130 years of research on reading education but less than three on AI in education. Let's not pretend we know more than we actually do.
Deliberate Community
This is probably my favorite model, and it is the one the European Commission has been actively promoting. Rather than having a minister, a superintendent, or even a single teacher make the call, this model asks each community to have a real conversation about what they value, what they fear, and what role they want AI to play in their children’s education. The decision gets made by the people it affects.
I want to be clear: I do not always agree with where those community conversations land. Some communities will decide things I think are wrong. But I deeply believe in the model itself, because it is a conscious decision made together rather than something that happens to a community. That distinction matters more than any single policy outcome.
What I Think All Schools Need
Looking across these four patterns, I am genuinely encouraged by the range of experimentation happening right now. The variation across systems is not just noise. It is data. Robust experimentation, even when messy, accelerates learning across the field.
That said, I think there are three things every school system needs to attend to regardless of where they fall on this spectrum.
The first is getting tools into the hands of teachers. Not to surveil or control student use, but because teachers cannot guide students through something they have not experienced themselves. Teacher access and teacher learning have to come first.
The second is building genuine understanding of what AI is and how it interacts with human cognition. This is not about digital literacy in the old sense. It is about helping educators and students understand something genuinely new about how knowledge is generated, evaluated, and used.
The third is taking the social, emotional, and ethical dimensions seriously. This is not a soft add-on. The risks here are real, and counteracting them requires the same intentionality as any other aspect of curriculum and instruction.
The jagged frontier in schools looks chaotic from the outside. Up close, it looks like a field trying very hard to figure out the right thing to do. I think that is actually a reasonable place to be.
