Sunday, March 29, 2026

The Corridor Conversation Deserves a Room of Its Own

I just got back from Philadelphia, where I spent a few days at the SITE conference, hoping to catch the pulse of where AI in education research is heading. I had a genuinely wonderful time. The people were sharp, the conversations were warm, and there was something quietly reassuring about realizing that the researchers you respect are wrestling with the same questions keeping you up at night.

But I left with a nagging feeling I couldn’t quite shake.

The formal papers left me with a sense of “and…?” This is not because the sessions were bad; many were good, and some were excellent. It was just a sense of slow progress. Usually, that is how research advances, and that is fine. But in the age of AI, it felt almost indulgent.

The conference format itself may be the problem. We submit proposals months in advance about research we wrapped up even earlier. By the time we stand at the podium, we are essentially reporting from a different era. In most fields, a year-long lag between doing and sharing is an inconvenience. In AI research right now, it is equivalent to geological time. We are presenting postcards from a past that no longer exists.

[Image: The Two Speeds. A two-panel black-and-white comic-style illustration contrasting “The Speed of AI” (a fast, dense, chaotic network of nodes and connections) with “The Speed of Academic Research” (a single, slow-moving pendulum or hourglass).]

So here is what I keep thinking about: what if we flipped the whole thing?

Keep the research sharing, but strip it down to the essential finding. A nugget, not a novella. Tell me what you learned, and your evidence, and then let’s get to work. Because the real value of getting a few hundred serious thinkers into the same building isn’t the formal presentations. It’s what happens in the hallways, waiting for the elevator, over coffee, at the margins. The corridor conversation is where the good stuff lives. Why are we so committed to keeping it out of the rooms?

An unconference model built around AI in education could do something genuinely useful. Picture it in four movements.

First, we share what we are actually doing right now. Not a polished study with clean findings, but live, messy, in-progress work. The experiment still running. The instrument we’re not sure about yet. The classroom observation that hasn’t found its theoretical frame.

Second, we surface emerging technical solutions and research tools while they are still warm enough to shape. Too often, by the time a new instrument reaches the field through traditional channels, half the community has already improvised its own version, and nobody knows what anybody else is measuring. Or everyone is using an instrument that was thrown together with the notion that we will fix it later, and later never comes.

Third, we find the collaborators to move forward with large-scale studies. Some of the most generative research partnerships I’ve seen started with someone saying “wait, you’re looking at that too?” in a hotel lobby at 10pm. SITE has sparked these partnerships in the past, but let’s build that moment into the schedule.

And fourth, we stress-test ideas before they calcify. Bring your half-formed hypothesis, your shaky design, your nagging methodological doubt, and subject it to the kind of rigorous, generous pushback that only happens when you’re in a room with people who actually care and have no incentive to be polite about bad ideas.

Here’s the part that excites me most: we could even do research on site. Instrument development happening in real time, with the expertise in the room feeding directly back into the design. That’s not a conference. That’s a lab with better snacks.

But there’s a larger argument underneath all of this, and I think we need to say it plainly. If we want our research to shape the direction of AI in education rather than simply document its wake, we cannot afford to keep working in parallel silos, each of us producing careful (sometimes barely powered) studies that trickle out through journals on an eighteen-month delay while the technology rewrites the classroom underneath us. The speed of AI is not going to slow down to match the pace of peer review. So we have to build something that can move alongside it.

What I am imagining is a kind of AI in education brain trust. Not a new professional organization with dues and bylaws and a nominating committee. We have enough of those. Something leaner and more intentional. A networked group of researchers who agree to aggregate what we know in real time, share findings quickly and in plain language, and respond together when the field needs guidance. A parallel research infrastructure, less well-funded than the AI labs driving these tools, but not beholden to their interests either. Our independence is the asset. The research community knows things about learning, about classrooms, about equity, about what teachers can realistically sustain, that no product team is going to discover on its own. The problem is that knowledge is scattered across conference presentations, working papers, faculty websites, and email threads between people who happened to meet in Philadelphia. A brain trust would gather it, synthesize it, and get it into the hands of practitioners and policymakers fast enough to actually matter.

Because here is what keeps me up at night. The decisions being made right now about how AI enters classrooms, which tools get adopted, what counts as learning, what gets automated and what gets protected, those decisions are being made with or without us. The question is whether the research community shows up to the conversation early enough to influence it, or whether we arrive, as usual, with a beautifully designed retrospective study about something that already happened.

Let’s bring the corridor back into the room. And then let’s build a room that the whole field can use.

Friday, March 27, 2026

The Jagged AI Frontier in Schools

Last week I joined a presentation on AI in education. During Q&A, a student asked what I thought was the most grounded question of the session: what is actually happening in K-12 schools right now? How are they responding?

It is a question I get a lot, and my answer keeps getting cleaner the more I talk with schools.

Ethan Mollick’s concept of the “jagged frontier” was built to describe what AI can and cannot do, but I think it applies just as well to how schools and school systems are navigating AI. The response is uneven, messy, and genuinely interesting to watch. After working across a range of systems in the US and paying close attention to what is happening globally, I see four distinct patterns.

All-In

Some school systems and even entire countries have decided to ride the wave as a matter of national agenda. Singapore is the clearest example, treating AI integration in education as a strategic priority rather than something to manage or contain. Its EdTech Masterplan 2030 lays out a whole-nation vision for what it means when a government decides to lead rather than follow. China has taken a similar posture, although the size and complexity of its educational system make it internally jagged. South Korea went all-in early and has since pulled back in striking fashion, which is its own fascinating case study in what happens when adoption outpaces readiness. The AI textbook rollout stalled at around 30% adoption, became politically polarized, and was ultimately reclassified as optional after less than a semester. Worth watching closely.

In the US, private systems like Alpha Schools were early movers. They have received significant criticism, though I think it is worth noting that much of the criticism is about the quality and implementation of the AI they are using rather than about whether the fundamental direction is wrong. The question of whether AI belongs in K-12 classrooms is a different question from whether this particular school is doing it well.

All-Out

Other school systems have gone the opposite direction, prohibiting AI use entirely. I understand the instinct, even when I disagree with the outcome. These systems are often responding to real and legitimate concerns, and some of them will probably shift as the pressure to adapt grows.

Tip-Toe

This is where I find most of the school systems I work with. The pattern is fairly consistent: start by giving administrators access, then move carefully toward teachers, and only then begin the much harder conversation about where and how students fit in. It is cautious, sometimes frustratingly slow, but it is also a recognizable form of institutional risk management, and a response to the fact that we actually do not know much. We’ve had 130 years of research on reading education but fewer than three on AI in education. Let’s not pretend we know more than we actually do.

Deliberate Community

This is probably my favorite model, and it is the one the European Commission has been actively promoting. Rather than having a minister, a superintendent, or even a single teacher make the call, this model asks each community to have a real conversation about what they value, what they fear, and what role they want AI to play in their children’s education. The decision gets made by the people it affects.

I want to be clear: I do not always agree with where those community conversations land. Some communities will decide things I think are wrong. But I deeply believe in the model itself, because it is a conscious decision made together rather than something that happens to a community. That distinction matters more than any single policy outcome.

What I Think All Schools Need

Following my four-tier framework for thinking about AI in education, I am genuinely encouraged by the range of experimentation happening right now. The variation across systems is not just noise. It is data. Robust experimentation, even when messy, accelerates learning across the field.

That said, I think there are three things every school system needs to attend to regardless of where they fall on this spectrum.

The first is getting tools into the hands of teachers. Not to surveil or control student use, but because teachers cannot guide students through something they have not experienced themselves. Teacher access and teacher learning have to come first.

The second is building genuine understanding of what AI is and how it interacts with human cognition. This is not about digital literacy in the old sense. It is about helping educators and students understand something genuinely new about how knowledge is generated, evaluated, and used.

The third is taking the social, emotional, and ethical dimensions seriously. This is not a soft add-on. The risks here are real, and counteracting them requires the same intentionality as any other aspect of curriculum and instruction.

The jagged frontier in schools looks chaotic from the outside. Up close, it looks like a field trying very hard to figure out the right thing to do. I think that is actually a reasonable place to be.

Tuesday, August 19, 2025

Gotta Go

Hi all,

If you are still interested, I have moved my thinking to a different platform. Right now I am guyonAI on Substack:

https://guyonai.substack.com/


There you will learn about the anxiety vacuum and other useful thoughts.