I just got back from Philadelphia, where I spent a few days at the SITE conference, hoping to catch the pulse of where AI in education research is heading. I had a genuinely wonderful time. The people were sharp, the conversations were warm, and there was something quietly reassuring about realizing that the researchers you respect are wrestling with the same questions keeping you up at night.
But I left with a nagging feeling I couldn’t quite shake.
The formal papers left me with a sense of “and…?”. Not because the sessions were weak; many were good, and some were excellent. It was the sense of slow progress. Usually that is how research advances, and that is fine. But in the age of AI, it felt almost indulgent.
The conference format itself may be the problem. We submit proposals months in advance about research we wrapped up even earlier. By the time we stand at the podium, we are essentially reporting from a different era. In most fields, a year-long lag between doing and sharing is an inconvenience. In AI research right now, it is equivalent to geological time. We are presenting postcards from a past that no longer exists.
So here is what I keep thinking about: what if we flipped the whole thing?
Keep the research sharing, but strip it down to the essential finding. A nugget, not a novella. Tell me what you learned, show me your evidence, and then let’s get to work. Because the real value of getting a few hundred serious thinkers into the same building isn’t the formal presentations. It’s what happens in the hallways, waiting for the elevator, over coffee, at the margins. The corridor conversation is where the good stuff lives. Why are we so committed to keeping it out of the rooms?
An unconference model built around AI in education could do something genuinely useful. Picture it in four movements.

First, we share what we are actually doing right now. Not a polished study with clean findings, but live, messy, in-progress work. The experiment still running. The instrument we’re not sure about yet. The classroom observation that hasn’t found its theoretical frame.

Second, we surface emerging technical solutions and research tools while they are still warm enough to shape. Too often, by the time a new instrument reaches the field through traditional channels, half the community has already improvised its own version and nobody knows what anybody else is measuring, or everyone is using an instrument that was thrown together with the notion that we will fix it later, though later never comes.

Third, we find collaborators for large-scale studies. Some of the most generative research partnerships I’ve seen started with someone saying “wait, you’re looking at that too?” in a hotel lobby at 10 p.m. SITE has sparked these moments in the past, but let’s build them into the schedule.

And fourth, we stress-test ideas before they calcify. Bring your half-formed hypothesis, your shaky design, your nagging methodological doubt, and subject it to the kind of rigorous, generous pushback that only happens when you’re in a room with people who actually care and have no incentive to be polite about bad ideas.

Here’s the part that excites me most: we could even do research on site. Instrument development happening in real time, with the expertise in the room feeding directly back into the design. That’s not a conference. That’s a lab with better snacks.
But there’s a larger argument underneath all of this, and I think we need to say it plainly. If we want our research to shape the direction of AI in education rather than simply document its wake, we cannot afford to keep working in parallel silos, each of us producing careful (sometimes barely powered) studies that trickle out through journals on an eighteen-month delay while the technology rewrites the classroom underneath us. The speed of AI is not going to slow down to match the pace of peer review. So we have to build something that can move alongside it.
What I am imagining is a kind of AI in education brain trust. Not a new professional organization with dues and bylaws and a nominating committee. We have enough of those. Something leaner and more intentional. A networked group of researchers who agree to aggregate what we know in real time, share findings quickly and in plain language, and respond together when the field needs guidance. A parallel research infrastructure, less well-funded than the AI labs driving these tools, but not beholden to their interests either. Our independence is the asset. The research community knows things about learning, about classrooms, about equity, about what teachers can realistically sustain, that no product team is going to discover on its own. The problem is that knowledge is scattered across conference presentations, working papers, faculty websites, and email threads between people who happened to meet in Philadelphia. A brain trust would gather it, synthesize it, and get it into the hands of practitioners and policymakers fast enough to actually matter.
Because here is what keeps me up at night. The decisions being made right now about how AI enters classrooms, which tools get adopted, what counts as learning, what gets automated and what gets protected, those decisions are being made with or without us. The question is whether the research community shows up to the conversation early enough to influence it, or whether we arrive, as usual, with a beautifully designed retrospective study about something that already happened.
Let’s bring the corridor back into the room. And then let’s build a room that the whole field can use.