Sunday, March 29, 2026

The Corridor Conversation Deserves a Room of Its Own

 I just got back from Philadelphia, where I spent a few days at the SITE conference, hoping to catch the pulse of where AI in education research is heading. I had a genuinely wonderful time. The people were sharp, the conversations were warm, and there was something quietly reassuring about realizing that the researchers you respect are wrestling with the same questions keeping you up at night.

But I left with a nagging feeling I couldn’t quite shake.

The formal papers left me with a sense of “and…?” This is not because the sessions were not good; many were, and some were excellent. It was just a sense of slow progress. Usually, that is how research advances, and that is fine. But in the age of AI, it felt almost indulgent.

The conference format itself may be the problem. We submit proposals months in advance about research we wrapped up even earlier. By the time we stand at the podium, we are essentially reporting from a different era. In most fields, a year-long lag between doing and sharing is an inconvenience. In AI research right now, it is equivalent to geological time. We are presenting postcards from a past that no longer exists.

The Two Speeds: a two-panel illustration contrasting the speed of AI (a fast, dense, chaotic network of nodes and connections) with the speed of academic research (a single, slow-moving pendulum or hourglass), in a black-and-white comic book style.

So here is what I keep thinking about: what if we flipped the whole thing?

Keep the research sharing, but strip it down to the essential finding. A nugget, not a novella. Tell me what you learned, and your evidence, and then let’s get to work. Because the real value of getting a few hundred serious thinkers into the same building isn’t the formal presentations. It’s what happens in the hallways, waiting for the elevator, over coffee, at the margins. The corridor conversation is where the good stuff lives. Why are we so committed to keeping it out of the rooms?

An unconference model built around AI in education could do something genuinely useful. Picture it in four movements.

First, we share what we are actually doing right now. Not a polished study with clean findings, but live, messy, in-progress work. The experiment still running. The instrument we’re not sure about yet. The classroom observation that hasn’t found its theoretical frame.

Second, we surface emerging technical solutions and research tools while they are still warm enough to shape. Too often, by the time a new instrument reaches the field through traditional channels, half the community has already improvised its own version and nobody knows what anybody else is measuring, or everyone is using an instrument that was thrown together with the notion that we will fix it later, though later never comes.

Third, we find collaborators for large-scale studies. Some of the most generative research partnerships I’ve seen started with someone saying “wait, you’re looking at that too?” in a hotel lobby at 10 p.m. SITE has created these moments in the past, but let’s build them into the schedule.

And fourth, we stress-test ideas before they calcify. Bring your half-formed hypothesis, your shaky design, your nagging methodological doubt, and subject it to the kind of rigorous, generous pushback that only happens when you’re in a room with people who actually care and have no incentive to be polite about bad ideas.

Here’s the part that excites me most: we could even do research on site. Instrument development happening in real time, with the expertise in the room feeding directly back into the design. That’s not a conference. That’s a lab with better snacks.

But there’s a larger argument underneath all of this, and I think we need to say it plainly. If we want our research to shape the direction of AI in education rather than simply document its wake, we cannot afford to keep working in parallel silos, each of us producing careful (sometimes barely powered) studies that trickle out through journals on an eighteen-month delay while the technology rewrites the classroom underneath us. The speed of AI is not going to slow down to match the pace of peer review. So we have to build something that can move alongside it.

What I am imagining is a kind of AI in education brain trust. Not a new professional organization with dues and bylaws and a nominating committee. We have enough of those. Something leaner and more intentional. A networked group of researchers who agree to aggregate what we know in real time, share findings quickly and in plain language, and respond together when the field needs guidance. A parallel research infrastructure, less well-funded than the AI labs driving these tools, but not beholden to their interests either. Our independence is the asset. The research community knows things about learning, about classrooms, about equity, about what teachers can realistically sustain, that no product team is going to discover on its own. The problem is that knowledge is scattered across conference presentations, working papers, faculty websites, and email threads between people who happened to meet in Philadelphia. A brain trust would gather it, synthesize it, and get it into the hands of practitioners and policymakers fast enough to actually matter.

Because here is what keeps me up at night. The decisions being made right now about how AI enters classrooms, which tools get adopted, what counts as learning, what gets automated and what gets protected, those decisions are being made with or without us. The question is whether the research community shows up to the conversation early enough to influence it, or whether we arrive, as usual, with a beautifully designed retrospective study about something that already happened.

Let’s bring the corridor back into the room. And then let’s build a room that the whole field can use.

Friday, March 27, 2026

The Jagged AI Frontier in Schools

Last week I joined a presentation on AI in education. During Q&A, a student asked what I thought was the most grounded question of the session: what is actually happening in K-12 schools right now? How are they responding?

It is a question I get a lot, and I find that my answer keeps getting cleaner the more I chat with schools.

Ethan Mollick’s concept of the “jagged frontier” was built to describe what AI can and cannot do, but I think it applies just as well to how schools and school systems are navigating AI. The response is uneven, messy, and genuinely interesting to watch. After working across a range of systems in the US and paying close attention to what is happening globally, I see four distinct patterns.

All-In

Some school systems and even entire countries have decided to ride the wave as a matter of national agenda. Singapore is the clearest example, treating AI integration in education as a strategic priority rather than something to manage or contain. Its EdTech Masterplan 2030 lays out a whole-nation vision for what it means when a government decides to lead rather than follow. China has taken a similar posture, although the size and complexity of its educational system make it internally jagged. South Korea went all-in early and has since pulled back in striking fashion, which is its own fascinating case study in what happens when adoption outpaces readiness. The AI textbook rollout stalled at around 30% adoption, became politically polarized, and was ultimately reclassified as optional after less than a semester. Worth watching closely.

In the US, private systems like Alpha Schools were early movers. They have received significant criticism, though I think it is worth noting that much of the criticism is about the quality and implementation of the AI they are using rather than about whether the fundamental direction is wrong. The question of whether AI belongs in K-12 classrooms is a different question from whether this particular school is doing it well.

All-Out

Other school systems have gone the opposite direction, prohibiting AI use entirely. I understand the instinct, even when I disagree with the outcome. These systems are often responding to real and legitimate concerns, and some of them will probably shift as the pressure to adapt grows.

Tip-Toe

This is where I find most of the school systems I work with. The pattern is fairly consistent: start by giving administrators access, then move carefully toward teachers, and only then begin the much harder conversation about where and how students fit in. It is cautious, sometimes frustratingly slow, but it is also a recognizable form of institutional risk management, and a response to the fact that we actually do not know much. We have had 130 years of research on reading education but fewer than three on AI in education. Let’s not pretend we know more than we actually do.

Deliberate Community

This is probably my favorite model, and it is the one the European Commission has been actively promoting. Rather than having a minister, a superintendent, or even a single teacher make the call, this model asks each community to have a real conversation about what they value, what they fear, and what role they want AI to play in their children’s education. The decision gets made by the people it affects.

I want to be clear: I do not always agree with where those community conversations land. Some communities will decide things I think are wrong. But I deeply believe in the model itself, because it is a conscious decision made together rather than something that happens to a community. That distinction matters more than any single policy outcome.

What I Think All Schools Need

Following my four-tier framework for thinking about AI in education, I am genuinely encouraged by the range of experimentation happening right now. The variation across systems is not just noise. It is data. Robust experimentation, even when messy, accelerates learning across the field.

That said, I think there are three things every school system needs to attend to regardless of where they fall on this spectrum.

The first is getting tools into the hands of teachers. Not to surveil or control student use, but because teachers cannot guide students through something they have not experienced themselves. Teacher access and teacher learning have to come first.

The second is building genuine understanding of what AI is and how it interacts with human cognition. This is not about digital literacy in the old sense. It is about helping educators and students understand something genuinely new about how knowledge is generated, evaluated, and used.

The third is taking the social, emotional, and ethical dimensions seriously. This is not a soft add-on. The risks here are real, and counteracting them requires the same intentionality as any other aspect of curriculum and instruction.

The jagged frontier in schools looks chaotic from the outside. Up close, it looks like a field trying very hard to figure out the right thing to do. I think that is actually a reasonable place to be.

Tuesday, August 19, 2025

Gotta Go

 Hi all,

If you are still interested, I have moved my thinking to a different platform. Right now I am guyonAI on Substack:

https://guyonai.substack.com/


There you will learn about the anxiety vacuum and other useful thoughts.



Thursday, April 25, 2024

Exploring Generative AI in Teacher Preparation Call for proposals

 Title/Theme: Exploring Generative AI in Teacher Preparation

The Challenge 

Generative AI is rapidly becoming commonplace. Coupled with the availability of personal devices and one-to-one technology adoption, we need to ensure that current and future generations of teachers understand its implications, know how to adjust their pedagogy, and know how to use it to assist in lesson planning, assessment, and individualized instruction. In this call, we are specifically inviting submissions from practitioners using evidence-based strategies in both pre-service and in-service teacher education. 

Submissions might focus on (but are not limited to): 

  • Personalized Learning 
  • Intelligent Tutoring Systems 
  • Automated Grading 
  • Data Analysis and Insights 
  • AI-driven Simulation and Virtual Reality in Teacher Education 
  • Feedback on teacher performance 
  • Lesson and assessment planning 
  • Inclusion and accessibility 
  • Chatbots in Learning and self-regulation 
  • Bots for socio-emotional learning 
  • Adaptive learning 
  • AI literacy for teacher educators 
  • What do teachers need to know in a world of Generative AI 
  • Teacher preparation in an age of Generative AI
  • Whose data? Who is learning? The complex realities of learning in an age of Generative AI 
  • Ethical and Equity Implications of Generative AI in Teacher Education 
  • The Economics of Generative AI and Teacher Education 
  • Cultural Sensitivity and the Deployment of AI in Diverse Educational Settings 
  • Assessing the Impact of Generative AI on Accessibility and Inclusion in Teacher Education 
  • Generative AI, Social Justice, and Educator Preparation. 

The Approach: 

In addition to an open call for proposals, we also intend to invite articles from scholars who have participated in events held by the AACTE Committee on Innovation and Technology (I & T Committee). Since the spring of 2023, the I & T Committee has held a series of webinars and online Lunch and Learn sessions focused on generative AI in teacher education. Researchers and practitioners familiar with AI tools shared policies, procedures, and practices with the AACTE community, leading to rich, forward-thinking conversations about this timely topic. We will continue to hold these events leading up to a featured session at the AACTE 2025 Annual Meeting in Long Beach, CA, where some of these scholars and I & T Committee members will be presenters. 

  • Editors:
    Valerie Hill-Jackson, Ph.D., Texas A&M University
    Cheryl Craig, Ph.D., Texas A&M University
  • Guest Co-Editors:
    Guy Trainin, Ph.D., University of Nebraska-Lincoln
    Laurie Bobley, Ed.D., Touro University
    Punya Mishra, Ph.D., Arizona State University
    Jon Margerum-Leys, Ph.D., Oakland University
    Peña L. Bedesem, Ph.D., Kent State University

Manuscript Guidelines 

Authors are encouraged to submit manuscripts that meet the following criteria: 

  • All manuscripts must be fully blinded to ensure a reliable review process. 
  • All manuscripts must meet publishing guidelines established by the American Psychological Association (APA) Publication Manual (7th edition, 2019). 
  • A manuscript, inclusive of references, tables, and figures, should not exceed 10,000 words. 
  • No more than one manuscript submission per author. 
  • Read more JTE guidelines. 
  • To submit your manuscript, please visit the JTE website. 

Timeline for Submission 

  • June 15, 2024: A 150-word bio for each author, a 300-word structured abstract, and 5 keywords due to guest editors. Email these items to jmleys@oakland.edu and the subject line should read: ‘JTE Anniversary 76(3) – Abstract’. 
  • September 1, 2024: Manuscript submission deadline for ‘Level 1’ external review; see the above guidelines. Manuscripts need to be in ‘near publication’ quality to move forward to the Level 2 review. 
  • November 15, 2024: Level 1 – External peer review completed. 
  • December 10 through January 10, 2025: ‘Level 2’ review by guest editors; feedback is provided to prospective authors on a rolling basis. 
  • Noon (CST), Saturday, February 1, 2025: All final manuscripts must be received in the Sage online system for consideration for publication in JTE’s 75th anniversary issue on Generative AI, 76(3). The publication date is targeted for May 2025. 

Monday, April 15, 2024

The Yin of AI in Education

 

Last Friday I had the chance to be part of a panel on AI at the Carson Center for Emerging Media Arts as part of a larger symposium (more here). It was a great event, and I learned a lot from the main speakers. After the morning speakers set a somewhat somber tone about potential outcomes, we were asked to try to present some of the positive outcomes that might emerge from AI (not just generative) in our respective fields.

I brought up three possible contributions to education:

1. Making teachers' lives easier. Easing the pressure on teachers by providing strategies that help reduce workload in non-instructional tasks such as assessment scoring, planning, letter and parent newsletter writing, etc. This does not replace the need to actually reduce the workload by shifting demands but augments it in ways that will free teachers to focus on what they do best—teaching students.

2. Creating differentiated plans. While curriculum authors and teacher educators provide many ideas about how to differentiate instruction, the workload of differentiating instruction for every relevant lesson can be quite significant depending on class size and variability. An AI that can learn from assessment and teacher planning can become an excellent companion, allowing for robustly differentiated instruction with a record that can potentially move with students to subsequent grades or new educational environments (for example, mobility between schools). 

3. Tutoring students and supporting less qualified teachers. The Global South has been experiencing teacher shortages in rural areas, and these shortages are expanding worldwide. Tailored AI can support less qualified teachers and tutor students. While this situation is less than ideal, AI can fill in the gaps until we can create better systems to support teaching.

For these to be successful, school systems must be able to create sequestered, safe instances of AI that can be tailored and protective of student, family, and teacher data. Without such instances, schools should not use AI systems in any way that gives them access to student data. The goal for researchers should, therefore, be creating these instances through specialized APIs and examining their impact on teachers and students.
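To make the data-protection principle concrete, here is a minimal sketch of the kind of local pseudonymization layer such a sequestered instance would need: student names are replaced with placeholders before any text leaves the school system, and the mapping needed to restore them never travels with the request. The names and helper functions here are purely illustrative, not any particular vendor’s API.

```python
import re

def pseudonymize(text, names):
    """Replace each student name with a stable placeholder so the
    text can be sent to an external model without identifying data."""
    mapping = {}
    redacted = text
    for i, name in enumerate(names, start=1):
        placeholder = f"Student{i}"
        mapping[placeholder] = name
        # Whole-word replacement; a real system would need proper
        # entity detection, nicknames, and collision handling.
        redacted = re.sub(rf"\b{re.escape(name)}\b", placeholder, redacted)
    return redacted, mapping

def restore(text, mapping):
    """Map placeholders in the model's response back to real names,
    entirely on the school's side of the boundary."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

note = "Maya helped Omar with fractions; Omar then tutored Maya in spelling."
redacted, mapping = pseudonymize(note, ["Maya", "Omar"])
# Only `redacted` would leave the sequestered instance; `mapping` stays local.
```

A production system would need far more than this (consistent entity detection, audit logging, consent management), but the design principle is the same: identity resolution stays inside the school system.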

Friday, March 29, 2024

Leaving Las Vegas thinking about Computer Science Education

The SITE conference was in Las Vegas this year. It was great to connect with old friends and find new ones. While on the plane home, I want to reflect on what I learned before the hubbub of teaching and my next conference arrives. 
What did I see? I chose to focus on the work being done in Computer Science. The panel put together by Chrystalla Mouza was especially excellent, with a great discussion of strategies and challenges in recruiting and retaining teachers for CS. The metaphor I settled on was the blanket that is too small: if a specific district is able to "recruit" from another school system, the problem does not get solved. It just changes location. The same can be said if we transition math or science teachers. The three strategies that emerged were:
1. Recruit internally from areas that have enough teachers (e.g., art, media, English, or social studies). For this to be successful, the professional learning has to be different and address ways of thinking and projects of interest that would fit different thinkers within their domain expertise.
2. Make it part of a general campaign to go into teaching. 
3. Focus on second careers/career changers. Here there is a need for short programs and funding during the process of changing careers.

The second strand was TPACK, which after more than 15 years of research is still one of the most often used frameworks. Punya Mishra led a series of presentations sharing the work done on the third handbook of TPACK research, soon to come out. The work was varied and interesting, and the range of approaches, measurements, and directions was extensive. 

Finally, and unsurprisingly, there was discussion and grappling with AI everywhere I went. AI is changing everything, including education, in ways that we do not fully understand, but researchers from around the world are trying to apply what we learned from previous critical moments (the advent of personal computing, the internet, social media). 

Saturday, March 23, 2024

What am I using AI for now as a Teacher Educator and Professional

Since Generative AI came out, I have been using it extensively. As an exercise, I am logging all of my direct Generative AI use, knowing that there is much AI in the background of which I am less aware.

Generic letters: Looking at my log, I have used generative AI to create four official letters that required carefully worded messages that were sensitive yet firm. In each case, I used ChatGPT to create an initial wording, then edited the text to bring back my writing style and some of my personality when appropriate, and finally ran it through Grammarly to make sure that I had no embarrassing grammar and spelling errors. The use of generative AI for composing official letters creates great efficiencies for me and reduces response times. Interestingly, one person asked me for a letter of support that they had generated with the help of GenAI as a starting point.

In teaching: I have used ChatGPT to create a description of the social networks between students in a classroom for an activity on creating groups in an elementary classroom. Once again, I needed to refine the prompt a few times and finally edit the document, but the result was quite good, and I created an assignment that I will keep using in the future.

I tried to see what GenAI would produce for an in-class presentation about reading instruction. The result was VERY generic, and I ended up discarding the suggested slides, though I did use DALL-E to create unique artwork for the slides I designed for teaching writing. While Generative AI was of limited use in creating content, I continue to enjoy the Designer feature in PowerPoint as a way to quickly spiff up my slide decks. Since we came back from Spring break, I have also created a set of questions for a welcome-back exercise that went very well.

Finally, I engaged my students in using GenAI to create groupings in their classroom (mock data) to see what the benefits and challenges are. The discussion that ensued included comments ranging from amazingly fast and accurate to a student questioning whether it is worth the time after a lot of editing.

Review of academic paper: Once I had read the paper I was reviewing and had the main points that I wanted to stress to the authors so they could improve their research paper, I used GenAI to expand and explain my bulleted points. The amount of editing this exercise created made for a very limited return on investment, and I doubt I will use it in this way again.

Podcasting: I used GenAI to create episode summaries of the Not That Kind of Doctor podcast, using the transcripts as the raw material. One episode summary was well done, while in a second GenAI completely missed the point. Both needed editing, but this was still a major time-saving application.

Across multiple uses, I usually prompt GenAI three times before I get everything that I want (or give up). More detailed prompts yield much more accurate results and less follow-up. Grammarly let me know that it has made over 6,000 suggested edits. GenAI has changed how I work; it has made some things much easier and saves me time every day. However, I am still concerned with accuracy and specificity, which can be achieved only through my deep-seated professional knowledge.


Friday, March 15, 2024

AI Creativity and the near future of education

AI and Creativity, created by ChatGPT-4

This week, I spent two days at the Nebraska Educational Technology Association meeting in Omaha. It was great to meet with friends, colleagues, and new acquaintances. Everyone talked about AI as a catapult for changing, rethinking, worrying, and joy. Evi and I spent some time talking about how we humans are still better than, or at least different from, the machines. Much as humans have for centuries argued that we are not like other animals, our current obsessive existential discussion (and fear) is about what happens when Artificial General Intelligence shows up. For me, the question is what we can do in the short run. The answer may very well be a focus on inquiry, creativity, and self-guided learning.

As highlighted by our work in Art TEAMS, integrating new tools and emphasizing creativity and divergent thinking in education presents a forward-thinking model that could significantly influence schools in the coming years. This approach, which blurs the traditional boundaries between art, sense-making, and metacognition, opens up several intriguing possibilities and challenges for education.

For example, schools might increasingly incorporate digital tools and platforms that foster creativity, such as digital art, coding for creative purposes, and virtual reality for immersive learning experiences. These tools can help students explore complex concepts in a hands-on manner, enhancing their understanding and using AI tools to amplify creativity and self-expression. 

By leveraging AI and other technologies, educators can create personalized learning paths for students. This customization allows students to explore subjects at their own pace and according to their interests, which can boost engagement and motivation. This can happen while we encourage metacognitive skills, thinking about one's own thinking, that can help students understand and regulate their learning processes, strengths, and areas for improvement. This self-awareness is crucial for lifelong learning and adaptability, especially in a rapidly changing world. 

This approach allows for breaking down the silos between subjects, especially art, humanities, and sciences, and encourages students to make connections across disciplines. This holistic method fosters critical thinking, creativity, and the ability to see problems from multiple perspectives. 

To reach this goal, we need to focus on teacher education and mindset shift. Teachers will need training and support to adapt to these new tools and pedagogies. Shifting from traditional teaching methods to a more student-centered, creative, and interdisciplinary approach requires time, resources, and a change in mindset. This can happen during pre-service and in-service teacher education and requires attention to equity and access. Our educational system has a robust tendency to focus on basic skills for those with perceived deficiencies who never get to interact with more complex educational experiences.

 As AI and other technologies become more integrated into education, maintaining a balance between technological efficiency and the human elements of learning—such as empathy, ethics, and emotional intelligence—will be vital. The potential for human flourishing through this educational paradigm is significant. By fostering an environment where creativity, critical thinking, and personal reflection are paramount, students can develop not only the skills needed for future careers but also the capacity for resilience, empathy, and ethical decision-making. This approach not only prepares students to thrive in a world where AI plays a significant role but also ensures they contribute positively to their family, community, and society.

Sunday, March 10, 2024

Is the iPad still a thing for teaching and learning?

 
Image of a man using an iPad for work in a modern office environment, produced by ChatGPT

The iPad was set to reshape the landscape of personal computing when Apple first introduced it on January 27, 2010, ahead of its market release in April of the same year. Conceived by Steve Jobs and his team as a middle ground between the iPhone and the MacBook, the iPad aimed to fill the gap for a portable computing device that was more capable than a smartphone but more accessible and user-friendly than a laptop. I remember the day it was introduced and the subsequent juvenile jokes about its name, "pad." Like many other Apple products, the initial release failed to realize its full potential. I was all in on the iPad from the day of its release and got one that August. 

Since then, I have been a faithful user of iPads and a frequent podcaster about using iPads in the classroom. However, my personal use of the iPad has transformed over time and is now about 75% entertainment. Looking at the graph below, I can see that most of my use happens over the weekend and for entertainment. I use the iPad for work-related social media (in fact, almost all of my social media is work-related), as an extra screen in places where I do not have one available or need a third screen, and occasionally to grade student assignments. However, the device that used to be my favorite is now my fourth most used Apple device, after the phone, laptop, and watch. In some ways, Apple missed the potential in the educational market, which they finally ceded to Google and Chromebooks. 

Despite these considerations, I remain a strong proponent of using iPads in early childhood education, extending through to third grade. The iPad stands out as the most user-friendly and accessible device for young learners, thanks to its intuitive design and interface that graciously accommodates developing fine motor skills. Additionally, the iPad distinguishes itself in the realm of the arts. Its capabilities for music composition and creation, alongside digital visual arts, are truly remarkable, making it an invaluable tool for fostering creativity and artistic expression in students.