See it

One of the growing problems in institutional AI adoption isn’t technical risk. It’s governance by familiarity. People who have played with a few tools, generated some slides, or automated a few emails quickly become visible as the “AI people” in their organisations. Because they are enthusiastic, often accessible, and appear ahead of the curve, they are invited into strategic conversations that should actually be shaped by those with expertise in information systems, digital governance, data management, risk management, and digital transformation and organisational change.

In other words, institutions often confuse early use with deep understanding.

This is particularly visible in higher education. Universities are currently rushing to formulate AI strategies and staff development plans. Yet many of these conversations are disproportionately influenced by self-proclaimed power users whose main qualification is experimentation. They know prompts. They know interfaces. They know which chatbot writes a cleaner paragraph. What they often lack is expertise and experience in how enterprise systems integrate, how data flows create compliance exposure, how vendor lock-in emerges, how information assurance works, or how technology decisions reshape institutional processes over time.

This distinction matters. And it matters deeply.

Using AI effectively is not the same as governing AI responsibly. One is about individual experience. The other is about systems, accountability, and long-term institutional consequences.

Say it

Seeing the problem is only the first step. Someone then must be willing to say it.

When the loudest voices in AI conversations are not the most qualified voices, silence becomes part of the problem. This is not about dismissing experimentation or putting boundaries on enthusiasm. Curiosity has value and innovation often begins there. But there is a difference between appreciating experimentation and allowing it to define institutional direction unchallenged.

Individuals across the organisation have a role here. Academics, professional services staff, digital leads, and technical specialists need to be willing to name when AI discussions are drifting towards convenience rather than competence. They need to be willing to ask uncomfortable questions. Are we consulting the people who understand systems integration? Are we involving those who understand governance and compliance? Are we mistaking confidence with tools for expertise in technology management?

Too often these questions remain unasked because there is social momentum behind the visible enthusiast. Nobody wants to slow progress or appear resistant. Yet saying it is not resisting innovation. It is protecting it from becoming shallow.

Sort it

Once the issue is named, the next responsibility is to help sort it. Not by criticising from the sidelines, but by actively redirecting the conversation towards the expertise that is missing. That means ensuring information systems scholars, enterprise architects, cyber security teams, legal specialists, and digital transformation professionals are not brought in as afterthoughts. It means having all of them in the same room, not sequentially consulted once decisions have already started to solidify.

That matters because the central questions raised by AI adoption do not sit neatly within one professional domain. They overlap. Decisions about introducing a new tool are simultaneously decisions about data governance, procurement exposure, systems interoperability, staff practice, regulatory compliance, and future organisational dependency.

Sorting it therefore requires more than adding a token specialist to a working group. It requires changing the instinctive way institutions respond to AI momentum. Instead of asking who has used the tool most, they need to ask who understands the institutional implications best. Instead of rewarding visibility and confidence, they need to create conversations where scrutiny, caution, and systems thinking are treated as forms of progress rather than barriers to it. Sorting it, then, is partly structural and partly cultural. Institutions need better governance mechanisms, but individuals need better reflexes.

See the governance gap before familiarity is mistaken for expertise.
Say clearly when visibility and enthusiasm are being allowed to outweigh institutional competence.
Sort it by shifting the conversation from individual tool use to systems level understanding.

By Professor Savvas Papagiannidis