
Governing AI: Why Boards Need to Use It to Understand It



A recent ICAEW panel on AI governance returned to a question that many boards of directors still treat as an IT matter: how to govern artificial intelligence within the organization. The panellists did a solid job of raising awareness. But awareness is no longer enough. In 2026, boards need to move from vigilance to competence.

That competence is what I want to discuss here. Not the usual catalogue of AI risks, which has been well covered elsewhere. But what it actually means, in practice, for a board to effectively govern a tool it often barely understands.


AI is a full governance issue

When AI comes up in governance discussions, the conversation tends to follow a familiar pattern: list the risks (bias, hallucinations, cybersecurity, compliance), recommend caution, and conclude that the board should “pay more attention.” That’s true, but it’s not enough.

A board of directors does not merely assess risks. It assesses risks AND opportunities, and makes decisions accordingly. AI is no exception.

The opportunity is real and substantial. AI can accelerate data analysis, improve decision quality when properly governed, significantly reduce operational costs, open new markets, and allow smaller teams to accomplish what previously required far greater resources. At the ICAEW panel, Peter Lee cited a founder who now delivers in one day with four people what used to take fifty people two months. The exact numbers are debatable; the trend is not.

The risks are real too, and I don’t minimize them. Model drift over time, the erosion of critical thinking when teams stop verifying AI outputs, the expansion of the cyber attack surface, the ethical implications of autonomous decision-making. All of these are genuine concerns. But if the board only sees risk, it misses the point of its role. And worse, it may make the worst possible decision: doing nothing, or locking everything down, while the market moves ahead.

The real question for a board is therefore not just “is AI risky” (yes, it is), but rather: what tools and processes do we put in place to capture the opportunities while minimizing the risk of errors, which can be severe? And how do we support our teams through this transition?


Why hands-on competence changes everything

This is where I’d like to bring a perspective I find insufficiently present in current discussions about AI governance.

There’s a lot of talk about “AI literacy” as a goal to achieve. It’s the right word. But literacy doesn’t develop by reading reports or attending webinars. It develops by using the tools.

When a board member uses an AI tool themselves, even for just a few hours, they discover things that no briefing can convey. They discover that the tool is remarkably effective at summarizing a 40-page document, identifying patterns in a dataset, or structuring a complex analysis. They also discover that the very same tool can produce a perfectly worded response containing a factual error buried in an otherwise flawless paragraph, invent sources with a confidence that would make any consultant blush, or politely approve a flawed argument instead of challenging it.

It’s this dual experience, of both the strengths and the limitations, that changes the nature of the questions asked around the boardroom table. A director who has seen this firsthand won’t ask “are we using AI?” but “how are our teams verifying the quality of what AI produces?” They won’t ask “are we compliant?” but “what controls have we put in place for high-risk use cases, and who is accountable?”

Without that direct experience, the board remains dependent on other people’s narratives. Management says AI is transforming productivity. Consultants sell AI strategies. Vendors promise intelligent solutions. And the board nods along, because it lacks the reference points to tell a real advance from a well-packaged trend.




What this means in practice for governance

A board that wants to govern AI seriously needs to act on several fronts simultaneously.

Strategically, it must require a clear map of AI usage across the organization: which departments, for which tasks, at what risk levels, under what supervision. This mapping is the starting point for any serious governance, and many organizations still don’t have one.
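To make that more concrete, here is a minimal sketch, in Python, of what one entry in such a usage register might capture. Everything below is illustrative only: the field names, risk categories, and example entries are assumptions for the sake of the sketch, not something recommended by the panel.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"        # e.g. internal drafting aids
    MEDIUM = "medium"  # e.g. externally visible content with review
    HIGH = "high"      # e.g. decisions affecting individuals

@dataclass
class AIUseCase:
    """One entry in a hypothetical organization-wide AI usage register."""
    department: str         # which department uses the tool
    task: str               # for which task
    tool: str               # which AI tool or model
    risk_level: RiskLevel   # at what risk level
    human_oversight: str    # under what supervision / who reviews outputs
    accountable_owner: str  # who is accountable for this use case

# Illustrative entries only; every value here is hypothetical.
register = [
    AIUseCase("Finance", "First-draft variance analysis", "General-purpose LLM",
              RiskLevel.MEDIUM, "Reviewed by the controller before circulation", "CFO"),
    AIUseCase("HR", "Screening of job applications", "Third-party screening tool",
              RiskLevel.HIGH, "Every recommendation checked by a recruiter", "CHRO"),
]

# A board-level summary might simply count use cases by risk level.
for level in RiskLevel:
    count = sum(1 for uc in register if uc.risk_level == level)
    print(f"{level.value}: {count} use case(s)")
```

The exact format matters far less than the discipline: the register exists, it is kept current, and the high-risk entries come with a named owner and a defined review process.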

Operationally, it must look at verification and quality control processes. When an AI tool produces a financial analysis, a legal document, or a client communication, who reviews it? Against what criteria? How often? Human review mechanisms are not a brake on innovation; they are the condition for its reliability.

On the human side, and this is an aspect that’s often overlooked, the board must ensure that teams are supported through this transition. Supported means trained, yes, but also listened to. What is their experience? Where are they running into difficulties? What uses have they developed on their own that might deserve to be formalized or scaled? The best intelligence on how AI is actually being used in an organization often comes from the people using it daily, not from management presentations.

On the ethical front, the board must draw a clear line between what the law permits and what is acceptable to the organization, its employees, and its stakeholders. As Peter Lee noted at the ICAEW panel, regulatory compliance and ethical acceptability are two distinct things. A company can be fully compliant with the EU AI Act and still adopt practices its employees find unacceptable.

And on the matter of the board’s own competence, directors need to develop enough hands-on familiarity with these tools to exercise their judgment. Not to become technical experts. But to have enough direct experience to ask the right questions, assess the credibility of the answers they receive, and tell the difference between an impressive demo and a genuine advance.


The symmetrical trap

I’d like to close on a point I consider essential.

There are two ways to govern AI poorly. The first is to rush in headlong, seduced by productivity promises, without putting the necessary safeguards in place. The second is to lock everything down out of fear, ban all usage, and leave employees who could benefit from these tools to fend for themselves, sometimes using personal solutions outside the organization’s control.

Pauline Norstrom, at the ICAEW panel, described exactly this situation: CIOs instructed to block everything, and teams missing out on real opportunities.

Good governance sits between these two extremes. It requires understanding what AI enables, identifying where it creates value, governing high-risk uses, supporting teams, and building feedback loops that allow the organization to learn and adjust over time.

And to do all of this with discernment, you need a concrete sense of what you’re talking about. Which brings us back to the starting point: board competence in AI governance comes through practice. A few hours of attentive use are worth more than ten hours of reading on the subject.

 

Marie Horodecki-Aymes is the founder of MHA Insights Inc., a consulting firm specializing in ESG strategy, responsible marketing, and AI governance. A Chartered Administrator (Adm.A., C.Adm.), she brings over 20 years of experience in brand strategy and sustainability in retail and consumer goods across Europe and Canada. She helps organizations navigate their sustainability transition and their responsible integration of AI.

Source: “AI Governance” panel, ICAEW Corporate Governance Conference, March 6, 2026.

 
 
 
