Sunday, May 10, 2026

Access, Time, and Permission: The Only AI Implementation Plan Schools Actually Need

One of my friends told me this week that his company had selected him for enterprise-linked access to ChatGPT. That was essentially the whole announcement. No training. No onboarding. No dedicated time to figure out what the tool could actually do for him. Just: you are one of the chosen ones, good luck.

I have seen this before. Every educator reading this has seen this before. An emerging technology gets advocated for, excitement rises, and somebody decides to be innovative… and get stuff (devices, licenses, a keynote).

A decade ago, the Los Angeles Unified School District set out to hand iPads to hundreds of thousands of students, with enormous fanfare and enormous hope. The technology was real. The potential was real (my many YouTube episodes on iPad in the Classroom show that I thought so). What was missing was everything else. Within months, students were bypassing security filters, devices were sitting in carts, and the program became a cautionary tale about what happens when you mistake access for implementation. A multi-million-dollar lesson in the difference between dropping technology into schools and actually integrating it into teaching and learning.

Schools and districts tend to make one of two mistakes with new tools. The first is the drop-and-hope approach: put it in teachers’ hands, trust the magic, and assume the early adopters will figure it out while everyone else quietly waits for the whole thing to blow over. Some teachers do flourish this way. They always have. But building a strategy around the people who would thrive under almost any conditions is not a strategy. It is luck wearing a professional development badge.

The second mistake is the overcorrection. Spooked by the chaos of the first approach, a nervous school board meeting, or a headline about students using AI to cheat, administrators reach for control. Approved use cases. Acceptable use policies that arrive before anyone has had a chance to discover what the tool is actually good for. Committees to review whether a teacher can use a particular prompt. I have sat in enough of those meetings to understand the instinct. I do not think it works.

Top-down directives have their place, but they tend to work against the specific character of open-ended generative AI, a technology that reveals its value through exploration, through iteration, through the kind of messy experimentation that does not fit neatly into a compliance framework. When you over-control it, you do not reduce the risk. You just guarantee it will not add value. The value has to be discovered; directives should serve as guardrails. A guardrail like "do not use a private license" is reasonable, but if you overprescribe (here is a list of five approved prompts), the discoveries that make the tool worth having never get a chance to emerge.

There is a related trap worth naming. We tend to hold new tools to a standard we never applied to the ones they are replacing. A teacher who uses AI to generate a first draft of a differentiated worksheet does not need that draft to be perfect. She needs it to be better than starting from scratch at nine o'clock on a Tuesday night. The relevant question is never "is this flawless?" It is "does this create more value than what I was doing before?" A xeroxed (I love seeing this word in print) worksheet from 1987 never got interrogated for its limitations. A handwritten comment on a student essay recycling the same four pieces of feedback never triggered a committee review. But ask a teacher to try an AI tool and suddenly the bar is perfection, or close enough to it that any error becomes evidence the whole enterprise was a mistake. That is not a standard. That is a way of protecting the status quo by demanding that anything new be perfect before it earns the right to exist. Experimentation requires permission to be good enough before it gets to be great.

What actually works is harder to mandate but not hard to describe. Teachers need protected time to try things and get them wrong, ideally alongside colleagues who are doing the same. The teacher down the hall who figured out how to use AI to generate differentiated discussion questions for her mixed-ability class is more valuable than any vendor demo, and she will share what she learned over lunch if you give her half a chance. That kind of peer-to-peer learning does not happen by accident. It needs structure, and it needs time carved out rather than squeezed in.

Teachers also need genuine permission. Not the kind that comes with a wink and an asterisk. Not “feel free to explore as long as nothing goes wrong and no parents call.” Real permission, backed by administrators who are willing to say publicly that experimentation is part of professional practice and that not every attempt needs to produce a polished outcome on the first try. That kind of permission is rarer than it should be.

And yes, resources matter. Devices and subscriptions, of course, but also the human infrastructure around them. If a district brings in outside expertise, it needs to be the kind that stays. Not a keynote, not a one-day workshop, not a framework delivered from a podium to a gymnasium full of teachers who have four other things on their minds. The support that actually changes practice is boots-on-the-ground, working alongside teachers in their specific contexts over a sustained period, helping them find the nooks and crannies where AI genuinely makes their work better rather than just different.

The formula is not complicated. To discover what a genuinely open-ended technology can do inside schools, you need three things: access, time, and permission. All three, together, sustained long enough for something real to develop.

Some teachers will figure out something useful on their own. They are resourceful. But they deserve better than resourcefulness as the plan. They deserve the conditions that make genuine professional learning possible.