Monday, April 6, 2026

AI, Privacy, and the Context Conundrum

Something interesting happened recently in a conversation with Claude. I had been using a series of prompts recommended by Daniel Pink to do a kind of personal audit, and based on those conversations, I made some genuine changes. But I also noticed something that gave me pause.


Claude concluded that I was spending way too much time on administrative tasks and not enough on creative and research work. And while there is probably a kernel of truth in that, it was not quite right. The reality is that I lean heavily on AI for administrative tasks, and far less so for research and creative work, where most of my thinking happens in conversation with colleagues, on walks, or just away from the screen. Claude cannot see that work. What it can see is how I use Claude.


In other words, Claude was making inferences about my whole professional life based on how I have been using Claude. It reminded me of something I tell my students: they assume that because I teach, most of my time must go to teaching. In reality, it is about 40%. The AI was making the same natural, but limited, assumption. It was seeing the visible part of an iceberg and mapping the whole thing.
That was a useful insight on its own. But it pointed somewhere more interesting.


I recently listened to a discussion on the AI in Education podcast about bias in AI grading systems. One recommendation was straightforward: reduce the contextual information you give the AI about students. Remove names, gender, ethnicity. Strip away the signals that could activate bias. The less context, the less opportunity for those patterns to distort the evaluation.


That logic applies to me, too. The less context Claude has about me, the less it can stereotype or misread my work patterns. But here is where the conundrum arrives.


Context is precisely what makes AI more helpful.


Take a concrete example. Suppose, hypothetically, that I have a medical condition that makes me significantly less effective between 3 and 5 PM. If I want AI to help me plan my work week strategically, knowing that fact would make a real difference. It could help me schedule demanding intellectual work for the morning and reserve lighter tasks for those two hours. Without that context, I am just getting generic planning advice.

But the moment I share that fact, I have handed a piece of genuinely private health information to an AI system, and by extension, to the company behind it. I may have no idea how that data is used, stored, or surfaced in future interactions. I have optimized for utility at the cost of privacy.
This is the lesson we already learned the hard way with social media. Early location-sharing felt like a fun, low-stakes way to connect. Foursquare check-ins were charming until they weren’t. The lure of personalization is real. The cost is often invisible until it isn’t. We traded something for convenience, and many of us are still sorting out what exactly we gave away.

For our own data, adults get to make that call. It is a tradeoff, and reasonable people will land in different places depending on their values, their risk tolerance, and how much they trust the platforms they use.


But student data is not ours to trade.

This is where I want to be unequivocal. The legal frameworks around student data, FERPA in the United States among them, exist for good reasons. Student data belongs to students and their families. When we use AI tools in educational settings, we are not making personal decisions about our own information. We are making decisions about children and young people who have not consented, who may not fully understand the implications, and who deserve protection.

So the practical guidance here is not subtle. Use only systems that are legally and contractually committed to protecting student data. Minimize the information you expose, even when a tool feels helpful. Resist the temptation of a quick AI fix that requires feeding it student names, identifiers, or demographic information.
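To make that guidance concrete, here is a minimal sketch of what a de-identification pass might look like before any student work reaches an AI tool. Everything in it is hypothetical: the roster, the patterns, and the redact function are invented to illustrate the data-minimization principle, and a script like this is no substitute for systems that are contractually bound to protect student data.

import re

# Hypothetical sketch: strip obvious identifiers from a student artifact
# before it reaches an AI tool. The roster and patterns below are invented
# for illustration; real de-identification needs far more care than this.

ROSTER = ["Jordan Alvarez", "Priya Nair"]  # hypothetical student names

def redact(text: str, names: list[str]) -> str:
    """Replace known names, emails, and long ID numbers with placeholders."""
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[STUDENT_{i}]", text)
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)  # emails
    text = re.sub(r"\b\d{6,}\b", "[ID]", text)  # student-ID-like digit runs
    return text

essay = "Jordan Alvarez (ID 4482917, jalvarez@school.org) argues that..."
print(redact(essay, ROSTER))  # [STUDENT_1] (ID [ID], [EMAIL]) argues that...

The point is not the regular expressions. It is the habit of asking, before every paste, whether the tool actually needs to know who the student is.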

The conundrum for adults using AI tools is real and worth sitting with. The tradeoff between context and privacy is genuinely complex.
For students, it is not a conundrum at all. It’s a responsibility.

Sunday, March 29, 2026

The Corridor Conversation Deserves a Room of Its Own

I just got back from Philadelphia, where I spent a few days at the SITE conference, hoping to catch the pulse of where AI in education research is heading. I had a genuinely wonderful time. The people were sharp, the conversations were warm, and there was something quietly reassuring about realizing that the researchers you respect are wrestling with the same questions keeping you up at night.

But I left with a nagging feeling I couldn’t quite shake.

The formal papers left me with a sense of "and…?". Not because the sessions were not good; many were, and some were excellent. It was a sense of slow progress. That is usually how research advances, and that is fine. But in the age of AI, it felt almost indulgent.

The conference format itself may be the problem. We submit proposals months in advance about research we wrapped up even earlier. By the time we stand at the podium, we are essentially reporting from a different era. In most fields, a year-long lag between doing and sharing is an inconvenience. In AI research right now, it is equivalent to geological time. We are presenting postcards from a past that no longer exists.

[Image: "The Two Speeds," a two-panel illustration contrasting "The Speed of AI" (a dense, chaotic network of nodes and connections) with "The Speed of Academic Research" (a slow-moving pendulum or hourglass), in a black-and-white comic book style.]

So here is what I keep thinking about: what if we flipped the whole thing?

Keep the research sharing, but strip it down to the essential finding. A nugget, not a novella. Tell me what you learned, show me your evidence, and then let's get to work. Because the real value of getting a few hundred serious thinkers into the same building isn't the formal presentations. It's what happens in the hallways, waiting for the elevator, over coffee, at the margins. The corridor conversation is where the good stuff lives. Why are we so committed to keeping it out of the rooms?

An unconference model built around AI in education could do something genuinely useful. Picture it in four movements.

First, we share what we are actually doing right now. Not a polished study with clean findings, but live, messy, in-progress work: the experiment still running, the instrument we're not sure about yet, the classroom observation that hasn't found its theoretical frame.

Second, we surface emerging technical solutions and research tools while they are still warm enough to shape. Too often, by the time a new instrument reaches the field through traditional channels, half the community has already improvised its own version and nobody knows what anybody else is measuring, or everyone is using an instrument that was thrown together with the notion that we will fix it later, though later never comes.

Third, we find the collaborators to move forward with large-scale studies. Some of the most generative research partnerships I've seen started with someone saying "wait, you're looking at that too?" in a hotel lobby at 10 p.m. SITE has created these moments in the past, but let's build them into the schedule.

And fourth, we stress-test ideas before they calcify. Bring your half-formed hypothesis, your shaky design, your nagging methodological doubt, and subject it to the kind of rigorous, generous pushback that only happens when you're in a room with people who actually care and have no incentive to be polite about bad ideas.

Here's the part that excites me most: we could even do research on site. Instrument development happening in real time, with the expertise in the room feeding directly back into the design. That's not a conference. That's a lab with better snacks.

But there’s a larger argument underneath all of this, and I think we need to say it plainly. If we want our research to shape the direction of AI in education rather than simply document its wake, we cannot afford to keep working in parallel silos, each of us producing careful (sometimes barely powered) studies that trickle out through journals on an eighteen-month delay while the technology rewrites the classroom underneath us. The speed of AI is not going to slow down to match the pace of peer review. So we have to build something that can move alongside it.

What I am imagining is a kind of AI in education brain trust. Not a new professional organization with dues and bylaws and a nominating committee. We have enough of those. Something leaner and more intentional. A networked group of researchers who agree to aggregate what we know in real time, share findings quickly and in plain language, and respond together when the field needs guidance. A parallel research infrastructure, less well-funded than the AI labs driving these tools, but not beholden to their interests either. Our independence is the asset. The research community knows things about learning, about classrooms, about equity, about what teachers can realistically sustain, that no product team is going to discover on its own. The problem is that knowledge is scattered across conference presentations, working papers, faculty websites, and email threads between people who happened to meet in Philadelphia. A brain trust would gather it, synthesize it, and get it into the hands of practitioners and policymakers fast enough to actually matter.

Because here is what keeps me up at night. The decisions being made right now about how AI enters classrooms, which tools get adopted, what counts as learning, what gets automated and what gets protected, those decisions are being made with or without us. The question is whether the research community shows up to the conversation early enough to influence it, or whether we arrive, as usual, with a beautifully designed retrospective study about something that already happened.

Let’s bring the corridor back into the room. And then let’s build a room that the whole field can use.

Friday, March 27, 2026

The Jagged AI Frontier in Schools

Last week I joined a presentation on AI in education. During Q&A, a student asked what I thought was the most grounded question of the session: what is actually happening in K-12 schools right now? How are they responding?

It is a question I get a lot, and I find that my answer keeps getting cleaner the more I chat with schools.

Ethan Mollick’s concept of the “jagged frontier” was built to describe what AI can and cannot do, but I think it applies just as well to how schools and school systems are navigating AI. The response is uneven, messy, and genuinely interesting to watch. After working across a range of systems in the US and paying close attention to what is happening globally, I see four distinct patterns.

All-In

Some school systems and even entire countries have decided to ride the wave as a matter of national agenda. Singapore is the clearest example, treating AI integration in education as a strategic priority rather than something to manage or contain. Its EdTech Masterplan 2030 lays out a whole-nation vision for what it means when a government decides to lead rather than follow. China has taken a similar posture, although the size and complexity of its educational system make it internally jagged. South Korea went all-in early and has since pulled back in striking fashion, which is its own fascinating case study in what happens when adoption outpaces readiness. The AI textbook rollout stalled at around 30% adoption, became politically polarized, and was ultimately reclassified as optional after less than a semester. Worth watching closely.

In the US, private systems like Alpha Schools were early movers. They have received significant criticism, though I think it is worth noting that much of the criticism is about the quality and implementation of the AI they are using rather than about whether the fundamental direction is wrong. The question of whether AI belongs in K-12 classrooms is a different question from whether this particular school is doing it well.

All-Out

Other school systems have gone the opposite direction, prohibiting AI use entirely. I understand the instinct, even when I disagree with the outcome. These systems are often responding to real and legitimate concerns, and some of them will probably shift as the pressure to adapt grows.

Tip-Toe

This is where I find most of the school systems I work with. The pattern is fairly consistent: start by giving administrators access, then move carefully toward teachers, and only then begin the much harder conversation about where and how students fit in. It is cautious, sometimes frustratingly slow, but it is also a recognizable form of institutional risk management, and a response to the fact that we actually do not know much. We've had 130 years of research on reading education but fewer than three on AI in education. Let's not pretend we know more than we actually do.

Deliberate Community

This is probably my favorite model, and it is the one the European Commission has been actively promoting. Rather than having a minister, a superintendent, or even a single teacher make the call, this model asks each community to have a real conversation about what they value, what they fear, and what role they want AI to play in their children’s education. The decision gets made by the people it affects.

I want to be clear: I do not always agree with where those community conversations land. Some communities will decide things I think are wrong. But I deeply believe in the model itself, because it is a conscious decision made together rather than something that happens to a community. That distinction matters more than any single policy outcome.

What I Think All Schools Need

Following my four-tier framework for thinking about AI in education, I am genuinely encouraged by the range of experimentation happening right now. The variation across systems is not just noise. It is data. Robust experimentation, even when messy, accelerates learning across the field.

That said, I think there are three things every school system needs to attend to regardless of where they fall on this spectrum.

The first is getting tools into the hands of teachers. Not to surveil or control student use, but because teachers cannot guide students through something they have not experienced themselves. Teacher access and teacher learning have to come first.

The second is building genuine understanding of what AI is and how it interacts with human cognition. This is not about digital literacy in the old sense. It is about helping educators and students understand something genuinely new about how knowledge is generated, evaluated, and used.

The third is taking the social, emotional, and ethical dimensions seriously. This is not a soft add-on. The risks here are real, and counteracting them requires the same intentionality as any other aspect of curriculum and instruction.

The jagged frontier in schools looks chaotic from the outside. Up close, it looks like a field trying very hard to figure out the right thing to do. I think that is actually a reasonable place to be.

Tuesday, August 19, 2025

Gotta Go

Hi all,

If you are still interested, I have moved my thinking to a different platform. Right now I am guyonAI on Substack:

https://guyonai.substack.com/


There you will learn about the anxiety vacuum and other useful thoughts.



Thursday, April 25, 2024

Exploring Generative AI in Teacher Preparation: Call for Proposals

Title/Theme: Exploring Generative AI in Teacher Preparation

The Challenge 

Generative AI is rapidly becoming commonplace. Coupled with the availability of personal devices and one-to-one technology adoption, this means we need to ensure that current and future generations of teachers understand its implications, know how to adjust their pedagogy, and know how to use it to assist in lesson planning, assessment, and individualized instruction. In this call, we are specifically inviting submissions from practitioners using evidence-based strategies in both pre-service and in-service teacher education.

Submissions might focus on (but are not limited to): 

  • Personalized Learning 
  • Intelligent Tutoring Systems 
  • Automated Grading 
  • Data Analysis and Insights 
  • AI-driven Simulation and Virtual Reality in Teacher Education 
  • Feedback on teacher performance 
  • Lesson and assessment planning 
  • Inclusion and accessibility 
  • Chatbots in Learning and self-regulation 
  • Bots for socio-emotional learning 
  • Adaptive learning 
  • AI literacy for teacher educators 
  • What do teachers need to know in a world of Generative AI? 
  • Teacher preparation in an age of Generative AI
  • Whose data? Who is learning? The complex realities of learning in an age of Generative AI 
  • Ethical and Equity Implications of Generative AI in Teacher Education 
  • The Economics of Generative AI and Teacher Education 
  • Cultural Sensitivity and the Deployment of AI in Diverse Educational Settings 
  • Assessing the Impact of Generative AI on Accessibility and Inclusion in Teacher Education 
  • Generative AI, Social Justice, and Educator Preparation. 

The Approach

In addition to an open call for proposals, we also intend to invite submissions from scholars who have participated in events held by the AACTE Committee on Innovation and Technology (I & T Committee). Since the spring of 2023, the I & T Committee has held a series of webinars and online Lunch and Learn sessions focused on generative AI in teacher education. Researchers and practitioners familiar with AI tools shared policies, procedures, and practices with the AACTE community, leading to rich, forward-thinking conversations about this timely topic. We will continue to hold these events leading up to a featured session at the AACTE 2025 Annual Meeting in Long Beach, CA, where some of these scholars and I & T Committee members will be presenters.

  • Editors:
    Valerie Hill-Jackson, Ph.D., Texas A&M University
    Cheryl Craig, Ph.D., Texas A&M University
  • Guest Co-Editors:
    Guy Trainin, Ph.D., University of Nebraska-Lincoln
    Laurie Bobley, Ed.D., Touro University
    Punya Mishra, Ph.D., Arizona State University
    Jon Margerum-Leys, Ph.D., Oakland University
    Peña L. Bedesem, Ph.D., Kent State University

Manuscript Guidelines 

Authors are encouraged to submit manuscripts that meet the following criteria: 

  • All manuscripts must be fully blinded to ensure a reliable review process. 
  • All manuscripts must meet publishing guidelines established by the American Psychological Association (APA) Publication Manual (7th edition, 2019). 
  • A manuscript, inclusive of references, tables, and figures, should not exceed 10,000 words. 
  • No more than one manuscript submission per author. 
  • Read the full JTE guidelines. 
  • To submit your manuscript, please visit the JTE website. 

Timeline for Submission 

  • June 15, 2024: A 150-word bio for each author, a 300-word structured abstract, and 5 keywords due to guest editors. Email these items to jmleys@oakland.edu with the subject line 'JTE Anniversary 76(3) – Abstract'. 
  • September 1, 2024: Manuscript submission deadline for 'Level 1' external review; see the above guidelines. Manuscripts need to be of 'near publication' quality to move forward to the Level 2 review. 
  • November 15, 2024: Level 1 – External peer review completed. 
  • December 10, 2024 through January 10, 2025: 'Level 2' review by guest editors; feedback is provided to prospective authors on a rolling basis. 
  • Noon (CST), Saturday, February 1, 2025: All final manuscripts must be received in the Sage online system for consideration for publication in JTE's 75th anniversary issue on Generative AI, 76(3). The publication date is targeted for May 2025.