Most universities now have an AI policy. A document — sometimes quite a good one — that articulates principles: academic integrity, responsible use, transparency, human oversight. Something that signals the institution has thought seriously about this and arrived at a considered position.
And then, in departments and faculties across the same institution, individual academics are making up their own minds about what to do in their specific unit, with their specific students, in their specific assessment context. Because the policy, however thoughtful, doesn’t tell them what to actually do on Monday morning.
This gap — between institution-wide principle and unit-level practice — is where most universities are stuck right now. And it’s a more solvable problem than it might appear, provided you approach it as an implementation challenge rather than a policy one.
The four things that actually need to happen
When I look at institutions that are navigating the AI transition well — not just managing the anxiety, but genuinely improving how they develop and evidence student capability — they tend to be doing four things concurrently. Not sequentially. Concurrently.
They are redesigning assessments. They are building faculty capability. They are developing student AI literacy. And they are establishing governance that is light enough to be followed and clear enough to be useful.
Each of these is necessary. None of them is sufficient on its own. And the order in which most institutions attempt them — policy first, everything else later and separately — is one of the main reasons progress is slow.
Assessment redesign
This is the hardest and most important piece, and the one institutions most consistently underinvest in.
The core task is straightforward to describe and genuinely difficult to execute: for each program or unit, ask what capability you are trying to develop, what valid evidence of that capability looks like, and whether your current assessment tasks generate that evidence in a world where AI can complete them. In many cases they don’t — not because the academics who designed them were careless, but because the conditions have changed faster than the infrastructure.
What good assessment redesign looks like varies by discipline and level. In some contexts it means shifting toward oral assessment — structured professional conversations that cannot be delegated to a language model. In others it means portfolio-based approaches where the process of development is evidenced alongside the product. In still others it means genuine work-integrated assessment where the task exists in a real professional context and the AI-assisted version is either obviously inferior or beside the point because the environment itself provides verification.
What it almost never means is simply adding an AI disclosure requirement to an existing task and calling it done. Disclosure doesn’t change what the task is measuring. If the task could be completed by AI before disclosure requirements were added, it can still be completed by AI after they were added — the student just has to tell you they did it.
The honest version of assessment redesign requires looking at tasks that have often been running unchanged for years, asking whether they were ever generating the evidence they claimed to generate, and being willing to admit it when the answer is no. That conversation is uncomfortable. It’s also overdue.
Faculty enablement
Faculty are being asked to make consequential decisions about AI — in their assessments, in their feedback practices, in their classroom policies — with widely varying levels of support and wildly varying levels of personal familiarity with the tools involved.
Some academics have spent significant time with generative AI tools and have developed genuinely sophisticated views about what they can and can’t do, where they’re useful and where they’re not. Others are making policy decisions about AI based on a few hours of use or, in some cases, none at all. Both groups are being asked to implement the same institutional policy.
Effective faculty enablement is not a one-day professional development workshop. It is an ongoing process of building practical familiarity with AI tools, connecting that familiarity to the specific assessment and design challenges of each discipline, and creating structured space for academics to share what they’re learning from their own experiments.
The academics who are finding their footing fastest are almost always those who have been given permission to experiment — to try AI-permissive assessment designs, observe what happens, and bring that evidence back to a community of practice. The ones who are most stuck are those who have been given a policy to implement without the practical support to implement it intelligently.
Institutions that want to move at pace need to invest in this. Not as a compliance exercise but as genuine capability building — and they need to design it so that the people with the deepest disciplinary knowledge are shaping the approach for their context, rather than receiving a generic solution from a central team.
Student AI literacy
Students are using AI. This is not a hypothesis. They are using it to varying degrees of sophistication, with varying levels of understanding of what it’s actually doing, and with varying levels of awareness of where it helps them learn and where it substitutes for learning in ways that will catch up with them later.
Student AI literacy — real literacy, not a checkbox module in week one — means helping students develop a working understanding of how generative AI tools function, what they’re good at, what they’re not, and how to use them in ways that genuinely augment their capability rather than bypassing the development of it.
This last distinction matters most. The difference between using AI to generate a first draft you then critically evaluate, restructure, and genuinely engage with — and using AI to generate a final product you lightly edit and submit — is a difference in whether you’re learning anything. Students, many of whom are under significant time and financial pressure, need to understand that distinction not as an integrity issue but as a self-interest issue. The credential has a use-by date. The capability doesn’t.
Literacy programs that frame AI purely through an integrity lens miss this. Students aren’t well served by being told what not to do without being helped to understand what they should do instead — and why it matters for them, not just for the institution’s assessment outcomes.
Governance that enables rather than paralyses
The governance conversation around AI in universities tends to go in one of two directions. Either it produces frameworks so cautious and hedged that they provide no practical guidance to anyone, or it produces rules so specific that they become obsolete within a semester as the tools evolve.
What works better is governance built around principles and decision rights rather than rules and prohibitions. Clarity about what kinds of decisions can be made at unit level, what requires faculty-level consistency, and what requires institutional sign-off. Clear escalation pathways for genuinely novel situations. And a review cadence that acknowledges the tools will keep changing and builds adaptation in rather than treating the current policy as permanent.
The phrase I find most useful here is “responsible innovation with guardrails”. Not “here is what you cannot do” as the primary frame — that’s a governance posture built for a stable environment, and this is not a stable environment. But rather: here are the principles that should govern your decisions, here is the support available to help you make them well, and here is how we will learn together as a community what good looks like in our specific context.
Guardrails, not walls. The distinction matters because walls stop movement entirely, and institutions cannot afford to stop moving.
Why these four things need to happen together
The temptation is to sequence these. Get the governance right first, then redesign the assessments, then train the faculty, then address student literacy. It feels more orderly.
It doesn’t work, for a simple reason: each of these four things informs the others. Assessment redesign without faculty capability produces redesigned tasks that academics don’t know how to implement or mark. Faculty enablement without assessment redesign produces knowledgeable academics with nowhere to apply what they’ve learned. Student literacy without redesigned assessments produces students who understand AI better but are still being assessed in ways that don’t reward or require that understanding. And governance without all three produces policy that floats above practice without connecting to it.
The institutions that are navigating this well have found ways to run these workstreams in parallel — not perfectly synchronised, not requiring everything to be resolved before anything moves, but genuinely concurrent and genuinely connected.
A note on pace
One thing worth naming: the pace of AI development means that whatever your institution decides this semester will need revisiting next semester. This is uncomfortable for organisations — and especially for universities — that are built around stability, precedent, and considered change.
The institutions that will serve their students best over the next five years are not necessarily the ones with the most sophisticated AI policy today. They are the ones building the internal capability to keep asking the right questions, keep updating their practice, and keep connecting their decisions to evidence of what is actually happening in their classrooms and in their assessment outcomes.
The goal is not to get AI right. The goal is to build an organisation that can keep getting it less wrong over time.
That’s a more achievable ambition — and, I’d argue, the honest one.
Meg Knight is Director of Learning & Operations (International) at Online Education Services (OES). She writes about online education, learning design, and the future of higher education.