See it

One of the growing problems in universities is not simply AI adoption. It is AI expertise inflation.

Over the last two years, a striking number of academics have suddenly become ‘AI experts’. Some have never previously researched information systems, digital technologies, data infrastructures, automation, platforms, algorithms, digital transformation, or technology governance. Yet, as AI has entered the zeitgeist, it has become an attractive label through which academics can reposition their existing interests.

This is not an argument against interdisciplinary engagement. Marketing, HRM, strategy, accounting, operations, entrepreneurship, and education all have important things to say about how AI is used, interpreted, and governed. The problem is not that different disciplines are engaging with AI. The problem is when topical proximity is mistaken for domain expertise.

AI is not just a pedagogical tool, a productivity aid, or a convenient object to attach to existing research agendas. It is embedded in sociotechnical systems. It raises questions about data quality, system design, organisational change, governance, accountability, infrastructure, automation, risk, power, work, and institutional transformation. These are not new questions. They are precisely the kinds of questions that information systems academics have been studying for decades.

When universities treat AI as a generic technology trend, rather than as a complex sociotechnical phenomenon, they risk flattening expertise. They elevate visibility over depth. They create the impression that a few workshops, some prompt engineering, and a handful of LinkedIn posts are enough to claim authority over one of the most consequential technological shifts in contemporary society.

Say it

Seeing the problem is only the first step. The problem also needs to be named clearly. Universities are now making strategic decisions about AI in teaching, assessment, research, governance, and professional practice. Those decisions should not only be shaped by those who have most recently rebranded themselves around AI. They should also be shaped by those with sustained expertise in information systems, digital innovation, data governance, digital ethics, and technology implementation.

This requires the information systems community to be more confident in naming its sociotechnical expertise. Information systems academics do not need to claim ownership over every AI conversation; AI clearly requires interdisciplinary debate. However, they do need to challenge the assumption that confidence with tools is the same as expertise in digital technologies, data, systems, governance, and organisational change.

Saying it is not about dismissing enthusiasm. Nor is it about excluding colleagues from other disciplines. It is about recognising that AI conversations need conceptual grounding, methodological care, and awareness of long-standing research traditions that already speak to digital technologies in organisations.

It is also about asking better questions. Who understands the systems being introduced? Who understands the data being generated? Who understands the governance implications? Who understands the organisational consequences? Who can distinguish between using AI, studying AI, managing AI, and governing AI? Without these questions, universities risk allowing visibility, speed, and confidence to substitute for expertise.

Sort it

To sort the problem of AI expertise inflation, universities need to distinguish more carefully between AI users, AI enthusiasts, and AI experts. All three may have something valuable to contribute, but they should not be treated as interchangeable.

They also need to build AI working groups that include genuine expertise, rather than defaulting to those who are simply most visible or most vocal. This means involving information systems academics, digital governance specialists, data protection experts, cyber security colleagues, learning technologists, professional services staff, and disciplinary experts in the same conversations, rather than consulting them separately after decisions have already started to take shape.

Universities should also recognise information systems as central to institutional AI conversations. AI is not only a teaching issue, an assessment issue, or a productivity issue. It is also an information systems issue. It concerns data, infrastructure, implementation, interoperability, governance, accountability, risk, organisational change, and the relationship between technology and work.

Sorting it also requires a cultural shift. Instead of asking who has used AI tools most visibly, universities need to ask who understands the institutional implications most deeply. Instead of rewarding speed and confidence, they need to create spaces where scrutiny, caution, conceptual grounding, and systems thinking are treated as valuable contributions.

Although AI is new in some respects, many of the organisational, ethical, infrastructural, and governance problems it raises are not. Universities need interdisciplinary debate on AI, but they also need to recognise the difference between engaging with AI and being an AI expert. That distinction matters because AI expertise inflation allows visibility, enthusiasm, and topical engagement to stand in for sustained expertise in digital technologies, data, systems, governance, and organisational change.

• See AI expertise inflation before visibility is mistaken for depth.
• Say clearly when confidence with AI tools is being allowed to stand in for sustained expertise.
• Sort it by building institutional AI conversations around systems knowledge, governance expertise, and genuine interdisciplinary engagement.

By Dr Oliver Kayas