The standard playbook for change management was designed for a world where change was a project with a beginning, a middle, and an end. AI does not work that way. Applying legacy frameworks to AI adoption produces the illusion of progress while the underlying resistance compounds.
The project model fails
Traditional change management assumes a stable target state. You define the future, build a path to it, manage the transition, and declare success. The organisation learns to tolerate the new system. Over time, the new becomes normal.
AI introduces a fundamentally different dynamic. The system evolves. Its outputs drift as data changes, as usage patterns shift, as the world it was trained on recedes into the past. There is no stable target state. There is only ongoing adaptation.
Resistance is information
Most change programmes treat resistance as friction to be overcome. In the AI context, this is a mistake. Resistance from frontline staff often reflects genuine knowledge about why a system’s outputs do not match operational reality. Suppressing that resistance removes a critical feedback mechanism.
Organisations that build AI systems designed to capture and respond to internal resistance outperform those that treat adoption as a communication challenge. The question is not: how do we get people to accept this system? The question is: what is the resistance telling us about the system?
Continuous calibration
The effective model for AI change is not a project. It is a practice. Teams need ongoing forums for surfacing discrepancies between system behaviour and operational need. They need clear channels for escalating anomalies. And they need leadership that treats recalibration as a sign of maturity, not failure.
This requires a different kind of organisational capacity — one that most change management curricula do not address. It also requires a different kind of AI partner: one that builds calibration into the relationship, not just the contract.
The organisations that navigate AI adoption successfully are not those with the best launch plans. They are those with the most honest feedback loops.