The ‘AI solves it all’ nonsense…

I sat through another AI tool demo last week, and the frustration was real.

The deck was slick. The language was confident. Words like personalised, adaptive, and transformative arrived in every third sentence, right on schedule. But when my team asked for specifics, we were told those features were ‘in development’, and we all know what that means…


This is the pattern.

The EdTech market is flooded right now with AI-powered learning tools that have three things in common: a beta product, a polished website, and absolutely no evidence that they work.

I don’t say that to be cynical. I say it because I’ve spent years working on the harder problem — what actually helps a learner acquire a skill, apply it, and retain it. That problem is not solved by a language model with a friendly interface. It’s solved by understanding the learner first, the technology second.

The demo I sat through last week had no answer to the most basic questions:

  • Who is this learner, and what are they trying to do?
  • How does this tool fit into the learning experience — before, during, or after instruction?
  • What does success look like, and how are you measuring it?
  • What happens when the AI gets it wrong?

These aren’t niche concerns for education researchers. They’re the minimum bar for any product claiming to change how people learn.


The hype is doing real damage.

When institutions chase AI tools because they feel pressure to be seen as innovative, they make purchasing decisions without evidence. When those tools underdeliver — and many will — it erodes trust in the legitimate innovations that do work.

The learners who lose most in that cycle are the ones who could benefit most: adult workers reskilling mid-career, students without strong support networks, people for whom a failed learning experience isn’t just frustrating — it’s expensive.

AI has genuine, meaningful potential in education. Intelligent feedback, adaptive content sequencing, early identification of learners at risk of disengaging. These are real use cases with real evidence behind them.


What I’d like to see more of.

Before signing anything, ask three questions:

What does the learner experience look like on a bad day — when the AI is wrong, or the learner is struggling? Good products are designed for failure, not just success.

How does this integrate with how learning actually happens in your context? A tool that exists outside the learning journey isn’t a learning tool. It’s a feature looking for a problem.

Where is your efficacy data, and who collected it? If the answer is “we’re still gathering that,” you’re funding their research.


The revolution in education won’t come from the best-funded demo. It’ll come from the tools that are built around learners — not around the pitch deck.

We’re not there yet. But we should stop pretending we are.


Meg Knight writes about workforce education, product strategy, and learning that works in the real world.

