Something interesting happened recently in a conversation with Claude. I had been using a series of prompts recommended by Daniel Pink to do a kind of personal audit, and based on those conversations, I made some genuine changes. But I also noticed something that gave me pause.
Claude concluded that I was spending way too much time on administrative tasks and not enough on creative and research work. And while there is probably a kernel of truth in that, it was not quite right. The reality is that I lean heavily on AI for administrative tasks, and far less so for research and creative work, where most of my thinking happens in conversation with colleagues, on walks, or just away from the screen. Claude cannot see that work. What it can see is how I use Claude.
In other words, Claude was making inferences about my whole professional life based on how I have been using Claude. It reminded me of something I tell my students: they assume that because I teach, most of my time must go to teaching. In reality, it is about 40%. The AI was making the same natural, but limited, assumption. It was seeing the visible part of an iceberg and mapping the whole thing.
That was a useful insight on its own. But it pointed somewhere more interesting.
I recently listened to a discussion on the AI in Education podcast about bias in AI grading systems. One recommendation was straightforward: reduce the contextual information you give the AI about students. Remove names, gender, ethnicity. Strip away the signals that could activate bias. The less context, the less opportunity for those patterns to distort the evaluation.
That logic applies to me, too. The less context Claude has about me, the less it can stereotype or misread my work patterns. But here is where the conundrum arrives.
Context is precisely what makes AI more helpful.
Take a concrete, if hypothetical, example. Say I have a medical condition that makes me significantly less effective between 3 and 5 PM. If I want AI to help me plan my work week strategically, knowing that fact would make a real difference. It could help me schedule demanding intellectual work for the morning and reserve lighter tasks for those two hours. Without that context, I am just getting generic planning advice.
This is the lesson we already learned the hard way with social media. Early location-sharing felt like a fun, low-stakes way to connect. Foursquare check-ins were charming until they weren’t. The lure of personalization is real. The cost is often invisible until it isn’t. We traded something for convenience, and many of us are still sorting out what exactly we gave away.
For our own data, adults get to make that call. It is a tradeoff, and reasonable people will land in different places depending on their values, their risk tolerance, and how much they trust the platforms they use.
But student data is not ours to trade.
This is where I want to be unequivocal. The legal frameworks around student data, FERPA in the United States among them, exist for good reasons. Student data belongs to students and their families. When we use AI tools in educational settings, we are not making personal decisions about our own information. We are making decisions about children and young people who have not consented, who may not fully understand the implications, and who deserve protection.
So the practical guidance here is not subtle. Use only systems that are legally and contractually committed to protecting student data. Minimize the information you expose, even when a tool feels helpful. Resist the temptation of a quick AI fix that requires feeding it student names, identifiers, or demographic information.
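To make "minimize the information you expose" concrete, here is a minimal, hypothetical sketch in Python of stripping obvious identifiers from student work before it ever reaches an AI tool. The function name, roster structure, and patterns are illustrative assumptions on my part, not a vetted anonymization pipeline; real de-identification takes more than pattern matching.

```python
# A minimal sketch (not a vetted anonymization tool) of removing obvious
# identifiers from a student submission before building an AI prompt.
# The function and roster structure here are hypothetical.
import re

def redact_submission(text: str, roster: list[str]) -> str:
    """Replace known student names and common identifiers with placeholders."""
    redacted = text
    # Replace each roster name with a neutral placeholder.
    for name in roster:
        redacted = re.sub(re.escape(name), "[STUDENT]", redacted, flags=re.IGNORECASE)
    # Mask email addresses and long digit strings (e.g., student ID numbers).
    redacted = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", redacted)
    redacted = re.sub(r"\b\d{6,}\b", "[ID]", redacted)
    return redacted

# Usage: redact before the text goes anywhere near a prompt, never after.
submission = "Jordan Lee (ID 20471138) argues that photosynthesis..."
clean = redact_submission(submission, roster=["Jordan Lee"])
# clean -> "[STUDENT] (ID [ID]) argues that photosynthesis..."
```

Even a rough step like this reframes the default: the identifying details never leave your machine, rather than being shared and then worried about.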
The conundrum for adults using AI tools is real and worth sitting with. The tradeoff between context and privacy is genuinely complex.
For students, it is not a conundrum at all. It’s a responsibility.

