Stop building AI tools. Start with the learner’s problem.
The EdTech industry has developed a curious habit: we announce the solution before we’ve properly understood the question.
In the past eighteen months, I’ve sat through dozens of conversations that follow a familiar shape. Someone presents a new AI tool — a content generator, an adaptive quiz engine, an AI tutor. The demo is impressive. The cost savings are real. The slide deck is polished.
And then someone in the room asks: “What problem does this solve for the learner?” The silence that follows tells you everything.
This isn’t a criticism of AI in education. I’m a believer, and I’ve spent significant time and budget building the case for it. But I’ve come to think that the industry’s dominant approach to AI adoption has it backwards. We start with capability — what the tool can do — and then work backwards, searching for a use case to justify it. The result is a proliferation of features that are technically impressive and pedagogically inert.
There’s a more disciplined way to work. And it starts with a deceptively simple question: what does the learner actually need to achieve?
The capability trap
AI tools arrive with a catalogue of features. Summarisation. Content generation. Automated feedback. Each one represents genuine engineering effort. Each one is legitimately useful — somewhere, for someone, in some context. The problem is that “this tool can generate a first draft in seconds” is not a learning design brief. It’s a production efficiency claim.
When we adopt tools on the basis of capability, we end up designing backwards. We find tasks the tool can do, then retrofit those tasks into the learning experience. The seams show. Learners notice. Educators notice too — they just don’t always have the language to say why something feels off.
What feels off is that the tool was designed to be used, not to serve a specific learning outcome.
Start further back
The teams I’ve seen do this well follow a different sequence. Before they evaluate any tool, they spend time on three questions. First: what is the learner trying to accomplish, specifically — not the module outcome, but the real-world capability they’re building? Second: where in that journey do they get stuck, give up, or lose confidence? Third: what would meaningfully change if that friction were removed?
Only once you’ve answered those questions with specificity — based on data, conversation, and honest observation — does it make sense to go looking for a tool. And often, when you do that work carefully, the tool you need is far simpler than the one you were originally excited about.
At OES, we call this the “curated core” approach: a deliberately limited set of AI applications, each tightly mapped to a defined learner or educator need. Not because we lack ambition. Because we know that a single well-implemented tool that solves a real problem is worth more than ten features looking for a purpose.
The question that changes the conversation
I’ve started using one question as a forcing function in every AI-related conversation: “If this tool works perfectly, what does a learner do differently?” Not “what does the course team produce faster” — what does the learner do differently?
It’s a harder question than it sounds. Most AI adoption conversations in education are, at their core, efficiency conversations dressed up as learning conversations. That’s legitimate — efficiency matters, and resources are finite. But it’s a different problem from improving learning, and conflating the two is how we end up with AI-generated content that’s fast and forgettable.
The institutions getting this right are the ones that insist on treating them as separate questions, answered in the right order.
The tools we build reflect the questions we ask first. If we start with capability, we build features. If we start with the learner, we build something worth using.
The learner-first diagnostic: five questions before you adopt
- What specific task or transition is the learner struggling with in this context?
- How do you know? What data, observation, or direct feedback tells you this?
- If that friction were removed, what would the learner be able to do that they can’t do now?
- Does this AI tool address that friction directly — or does it make something easier for the course team?
- How will you know if it worked? Name the change in learner behaviour you expect to see — not what you produced faster.