Sunday, May 10, 2026

Access, Time, and Permission: The Only AI Implementation Plan Schools Actually Need

One of my friends told me this week that his company had selected him for enterprise-linked access to ChatGPT. That was essentially the whole announcement. No training. No onboarding. No dedicated time to figure out what the tool could actually do for him. Just: you are one of the chosen ones, good luck.

I have seen this before. Every educator reading this has seen this before. An emerging technology gets advocated for, excitement rises, and somebody decides to be innovative… by getting stuff (devices, licenses, a keynote).

A decade ago, the Los Angeles Unified School District handed out iPads to hundreds of thousands of students with enormous fanfare and enormous hope. The technology was real. The potential was real (my many YouTube episodes on iPad in the Classroom show that I thought so). What was missing was everything else. Within months, students were bypassing security filters, devices were sitting in carts, and the program became a cautionary tale about what happens when you mistake access for implementation. A multi-million-dollar lesson in the difference between dropping technology into schools and actually integrating it into teaching and learning.

Schools and districts tend to make one of two mistakes with new tools. The first is the drop-and-hope approach: put it in teachers’ hands, trust the magic, and assume the early adopters will figure it out while everyone else quietly waits for the whole thing to blow over. Some teachers do flourish this way. They always have. But building a strategy around the people who would thrive under almost any conditions is not a strategy. It is luck wearing a professional development badge.

The second mistake is the overcorrection. Spooked by the chaos of the first approach, a nervous school board meeting, or a headline about students using AI to cheat, administrators reach for control. Approved use cases. Acceptable use policies that arrive before anyone has had a chance to discover what the tool is actually good for. Committees to review whether a teacher can use a particular prompt. I have sat in enough of those meetings to understand the instinct. I do not think it works.

Top-down directives have their place, but they tend to work against the specific character of open-ended generative AI, a technology that reveals its value through exploration, through iteration, through the kind of messy experimentation that does not fit neatly into a compliance framework. When you over-control it, you do not reduce the risk. You just guarantee it will not add value. The value has to be discovered; directives should serve as guardrails. A guardrail like "do not use a private license" is good, but if you overprescribe (here is a list of five approved prompts), the magic of AI solutions will not emerge.

There is a related trap worth naming. We tend to hold new tools to a standard we never applied to the ones they are replacing. A teacher who uses AI to generate a first draft of a differentiated worksheet does not need that draft to be perfect. She needs it to be better than starting from scratch at nine o'clock on a Tuesday night. The relevant question is never "is this flawless?" It is "does this create more value than what I was doing before?" A xeroxed (I love seeing this word in print) worksheet from 1987 never got interrogated for its limitations. A handwritten comment on a student essay full of the same four pieces of feedback never triggered a committee review. But ask a teacher to try an AI tool and suddenly the bar is perfection, or close enough to it that any error becomes evidence the whole enterprise was a mistake. That is not a standard. That is a way of protecting the status quo by demanding that anything new better be perfect before it earns the right to exist. Experimentation requires permission to be good enough before it gets to be great.

What actually works is harder to mandate but not hard to describe. Teachers need protected time to try things and get them wrong, ideally alongside colleagues who are doing the same. The teacher down the hall who figured out how to use AI to generate differentiated discussion questions for her mixed-ability class is more valuable than any vendor demo, and she will share what she learned over lunch if you give her half a chance. That kind of peer-to-peer learning does not happen by accident. It needs structure, and it needs time carved out rather than squeezed in.

Teachers also need genuine permission. Not the kind that comes with a wink and an asterisk. Not “feel free to explore as long as nothing goes wrong and no parents call.” Real permission, backed by administrators who are willing to say publicly that experimentation is part of professional practice and that not every attempt needs to produce a polished outcome on the first try. That kind of permission is rarer than it should be.

And yes, resources matter. Devices and subscriptions, yes, but also the human infrastructure around them. If a district brings in outside expertise, it needs to be the kind that stays. Not a keynote, not a one-day workshop, not a framework delivered from a podium to a gymnasium full of teachers who have four other things on their minds. The support that actually changes practice is boots-on-the-ground, working alongside teachers in their specific contexts over a sustained period, helping them find the nooks and crannies where AI genuinely makes their work better rather than just different.

The formula is not complicated. To discover what a genuinely open-ended technology can do inside schools, you need three things: access, time, and permission. All three, together, sustained long enough for something real to develop.

Some will figure out something useful on their own. Teachers are resourceful. But teachers deserve better than resourcefulness as the plan. They deserve the conditions that make genuine professional learning possible.

Monday, April 6, 2026

AI, Privacy, and the Context Conundrum

Something interesting happened recently in a conversation with Claude. I had been using a series of prompts recommended by Daniel Pink to do a kind of personal audit, and based on those conversations, I made some genuine changes. But I also noticed something that gave me pause.


Claude concluded that I was spending way too much time on administrative tasks and not enough on creative and research work. And while there is probably a kernel of truth in that, it was not quite right. The reality is that I lean heavily on AI for administrative tasks, and far less so for research and creative work, where most of my thinking happens in conversation with colleagues, on walks, or just away from the screen. Claude cannot see that work. What it can see is how I use Claude.


In other words, Claude was making inferences about my whole professional life based on how I have been using Claude. It reminded me of something I tell my students: they assume that because I teach, most of my time must go to teaching. In reality, it is about 40%. The AI was making the same natural, but limited, assumption. It was seeing the visible part of an iceberg and mapping the whole thing.
That was a useful insight on its own. But it pointed somewhere more interesting.


I recently listened to a discussion on the AI in Education podcast about bias in AI grading systems. One recommendation was straightforward: reduce the contextual information you give the AI about students. Remove names, gender, ethnicity. Strip away the signals that could activate bias. The less context, the less opportunity for those patterns to distort the evaluation.


That logic applies to me, too. The less context Claude has about me, the less it can stereotype or misread my work patterns. But here is where the conundrum arrives.


Context is precisely what makes AI more helpful.


Take a concrete example. Say, hypothetically, that I have a medical condition that makes me significantly less effective between 3 and 5 PM. If I want AI to help me plan my work week strategically, knowing that fact would make a real difference. It could help me schedule demanding intellectual work for the morning and reserve lighter tasks for those two hours. Without that context, I am just getting generic planning advice. But the moment I share it, I have handed a piece of genuinely private health information to an AI system, and by extension, to the company behind it. I may have no idea how that data is used, stored, or surfaced in future interactions. I have optimized for utility at the cost of privacy.
This is the lesson we already learned the hard way with social media. Early location-sharing felt like a fun, low-stakes way to connect. Foursquare check-ins were charming until they weren’t. The lure of personalization is real. The cost is often invisible until it isn’t. We traded something for convenience, and many of us are still sorting out what exactly we gave away.

For our own data, adults get to make that call. It is a tradeoff, and reasonable people will land in different places depending on their values, their risk tolerance, and how much they trust the platforms they use.


But student data is not ours to trade.

This is where I want to be unequivocal. The legal frameworks around student data, FERPA in the United States among them, exist for good reasons. Student data belongs to students and their families. When we use AI tools in educational settings, we are not making personal decisions about our own information. We are making decisions about children and young people who have not consented, who may not fully understand the implications, and who deserve protection.

So the practical guidance here is not subtle. Use only systems that are legally and contractually committed to protecting student data. Minimize the information you expose, even when a tool feels helpful. Resist the temptation of a quick AI fix that requires feeding it student names, identifiers, or demographic information.
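For anyone actually wiring AI into a grading or feedback workflow, here is a minimal sketch, in Python, of what that minimization can look like before any student text reaches an external service. The roster, the ID pattern, and the redact helper are hypothetical illustrations, not a reference to any particular tool or district's setup.

    import re

    # Hypothetical roster; in practice this might come from a student information system export.
    STUDENT_NAMES = ["Jordan Alvarez", "Priya Natarajan"]

    def redact(text: str) -> str:
        """Replace student names and ID-like numbers before text leaves the building."""
        for name in STUDENT_NAMES:
            text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
        # Assumed format for this sketch: student IDs are 6-9 digit numbers.
        text = re.sub(r"\b\d{6,9}\b", "[ID]", text)
        return text

    essay = "Jordan Alvarez (ID 4821937) argues that the narrator is unreliable."
    print(redact(essay))
    # -> "[STUDENT] (ID [ID]) argues that the narrator is unreliable."

A filter like this will not catch everything, which is part of the point: minimization is a floor, not a substitute for the legal and contractual protections described above.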

The conundrum for adults using AI tools is real and worth sitting with. The tradeoff between context and privacy is genuinely complex.
For students, it is not a conundrum at all. It’s a responsibility.

Sunday, March 29, 2026

The Corridor Conversation Deserves a Room of Its Own

I just got back from Philadelphia, where I spent a few days at the SITE conference, hoping to catch the pulse of where AI in education research is heading. I had a genuinely wonderful time. The people were sharp, the conversations were warm, and there was something quietly reassuring about realizing that the researchers you respect are wrestling with the same questions keeping you up at night.

But I left with a nagging feeling I couldn’t quite shake.

The formal papers left me with a sense of “and…”. Not because the sessions were not good; many were, and some were excellent. It was more a lingering sense of slow progress. Usually that is simply how research advances, and that is fine. But in the age of AI, it felt almost indulgent.

The conference format itself may be the problem. We submit proposals months in advance about research we wrapped up even earlier. By the time we stand at the podium, we are essentially reporting from a different era. In most fields, a year-long lag between doing and sharing is an inconvenience. In AI research right now, it is equivalent to geological time. We are presenting postcards from a past that no longer exists.

[Image: The Two Speeds, a two-panel illustration contrasting "The Speed of AI" (a dense, chaotic network of nodes and connections) with "The Speed of Academic Research" (a slow-moving pendulum or hourglass), in a black-and-white comic book style.]

So here is what I keep thinking about: what if we flipped the whole thing?

Keep the research sharing, but strip it down to the essential finding. A nugget, not a novella. Tell me what you learned, and your evidence, and then let’s get to work. Because the real value of getting a few hundred serious thinkers into the same building isn’t the formal presentations. It’s what happens in the hallways, waiting for the elevator, over coffee, at the margins. The corridor conversation is where the good stuff lives. Why are we so committed to keeping it out of the rooms?

An unconference model built around AI in education could do something genuinely useful. Picture it in four movements.

First, we share what we are actually doing right now. Not a polished study with clean findings, but live, messy, in-progress work. The experiment still running. The instrument we’re not sure about yet. The classroom observation that hasn’t found its theoretical frame.

Second, we surface emerging technical solutions and research tools while they are still warm enough to shape. Too often, by the time a new instrument reaches the field through traditional channels, half the community has already improvised its own version and nobody knows what anybody else is measuring, or everyone is using an instrument that was quickly put together with the notion that we will fix it later, though later never comes.

Third, we find the collaborators to move forward with large-scale studies. Some of the most generative research partnerships I’ve seen started with someone saying “wait, you’re looking at that too?” in a hotel lobby at 10 PM. SITE has created these in the past, but let’s build that moment into the schedule.

And fourth, we stress-test ideas before they calcify. Bring your half-formed hypothesis, your shaky design, your nagging methodological doubt, and subject it to the kind of rigorous, generous pushback that only happens when you’re in a room with people who actually care and have no incentive to be polite about bad ideas.

Here’s the part that excites me most: we could even do research on site. Instrument development happening in real time, with the expertise in the room feeding directly back into the design. That’s not a conference. That’s a lab with better snacks.

But there’s a larger argument underneath all of this, and I think we need to say it plainly. If we want our research to shape the direction of AI in education rather than simply document its wake, we cannot afford to keep working in parallel silos, each of us producing careful (sometimes barely powered) studies that trickle out through journals on an eighteen-month delay while the technology rewrites the classroom underneath us. The speed of AI is not going to slow down to match the pace of peer review. So we have to build something that can move alongside it.

What I am imagining is a kind of AI in education brain trust. Not a new professional organization with dues and bylaws and a nominating committee. We have enough of those. Something leaner and more intentional. A networked group of researchers who agree to aggregate what we know in real time, share findings quickly and in plain language, and respond together when the field needs guidance. A parallel research infrastructure, less well-funded than the AI labs driving these tools, but not beholden to their interests either. Our independence is the asset. The research community knows things about learning, about classrooms, about equity, about what teachers can realistically sustain, that no product team is going to discover on its own. The problem is that knowledge is scattered across conference presentations, working papers, faculty websites, and email threads between people who happened to meet in Philadelphia. A brain trust would gather it, synthesize it, and get it into the hands of practitioners and policymakers fast enough to actually matter.

Because here is what keeps me up at night. The decisions being made right now about how AI enters classrooms, which tools get adopted, what counts as learning, what gets automated and what gets protected, those decisions are being made with or without us. The question is whether the research community shows up to the conversation early enough to influence it, or whether we arrive, as usual, with a beautifully designed retrospective study about something that already happened.

Let’s bring the corridor back into the room. And then let’s build a room that the whole field can use.

Friday, March 27, 2026

The Jagged AI Frontier in Schools

Last week I joined a presentation on AI in education. During Q&A, a student asked what I thought was the most grounded question of the session: what is actually happening in K-12 schools right now? How are they responding?

It is a question I get a lot, and I find that my answer keeps getting cleaner the more I chat with schools.

Ethan Mollick’s concept of the “jagged frontier” was built to describe what AI can and cannot do, but I think it applies just as well to how schools and school systems are navigating AI. The response is uneven, messy, and genuinely interesting to watch. After working across a range of systems in the US and paying close attention to what is happening globally, I see four distinct patterns.

All-In

Some school systems and even entire countries have decided to ride the wave as a matter of national agenda. Singapore is the clearest example, treating AI integration in education as a strategic priority rather than something to manage or contain. Its EdTech Masterplan 2030 lays out a whole-nation vision for what it means when a government decides to lead rather than follow. China has taken a similar posture, although the size and complexity of its educational system make it internally jagged. South Korea went all-in early and has since pulled back in striking fashion, which is its own fascinating case study in what happens when adoption outpaces readiness. The AI textbook rollout stalled at around 30% adoption, became politically polarized, and was ultimately reclassified as optional after less than a semester. Worth watching closely.

In the US, private systems like Alpha Schools were early movers. They have received significant criticism, though I think it is worth noting that much of the criticism is about the quality and implementation of the AI they are using rather than about whether the fundamental direction is wrong. The question of whether AI belongs in K-12 classrooms is a different question from whether this particular school is doing it well.

All-Out

Other school systems have gone the opposite direction, prohibiting AI use entirely. I understand the instinct, even when I disagree with the outcome. These systems are often responding to real and legitimate concerns, and some of them will probably shift as the pressure to adapt grows.

Tip-Toe

This is where I find most of the school systems I work with. The pattern is fairly consistent: start by giving administrators access, then move carefully toward teachers, and only then begin the much harder conversation about where and how students fit in. It is cautious, sometimes frustratingly slow, but it is also a recognizable form of institutional risk management, and a response to the fact that we actually do not know much. We’ve had 130 years of research on reading education but fewer than three on AI in education. Let’s not pretend we know more than we actually do.

Deliberate Community

This is probably my favorite model, and it is the one the European Commission has been actively promoting. Rather than having a minister, a superintendent, or even a single teacher make the call, this model asks each community to have a real conversation about what they value, what they fear, and what role they want AI to play in their children’s education. The decision gets made by the people it affects.

I want to be clear: I do not always agree with where those community conversations land. Some communities will decide things I think are wrong. But I deeply believe in the model itself, because it is a conscious decision made together rather than something that happens to a community. That distinction matters more than any single policy outcome.

What I Think All Schools Need

Following my four-tier framework for thinking about AI in education, I am genuinely encouraged by the range of experimentation happening right now. The variation across systems is not just noise. It is data. Robust experimentation, even when messy, accelerates learning across the field.

That said, I think there are three things every school system needs to attend to regardless of where they fall on this spectrum.

The first is getting tools into the hands of teachers. Not to surveil or control student use, but because teachers cannot guide students through something they have not experienced themselves. Teacher access and teacher learning have to come first.

The second is building genuine understanding of what AI is and how it interacts with human cognition. This is not about digital literacy in the old sense. It is about helping educators and students understand something genuinely new about how knowledge is generated, evaluated, and used.

The third is taking the social, emotional, and ethical dimensions seriously. This is not a soft add-on. The risks here are real, and counteracting them requires the same intentionality as any other aspect of curriculum and instruction.

The jagged frontier in schools looks chaotic from the outside. Up close, it looks like a field trying very hard to figure out the right thing to do. I think that is actually a reasonable place to be.