
Governing AI: Why Boards Need to Use It to Understand It

Updated: Mar 26



A recent ICAEW panel on AI governance surfaced a question that many boards of directors still treat as an IT matter: how to govern artificial intelligence within the organization. The panelists raised awareness effectively. Awareness, however, is no longer sufficient. In 2026, boards must move from vigilance to competence.

This essay examines what that competence actually requires. Risk catalogues are well documented elsewhere; the concern here is what it means, in practice, for a board to govern a tool it often barely understands.


AI is a full governance issue

When AI appears in governance discussions, the conversation follows a predictable arc: list the risks (bias, hallucinations, cybersecurity, compliance), recommend caution, conclude that the board should "pay more attention." The advice is sound but incomplete.

A board of directors assesses opportunities alongside risks, then makes decisions. AI warrants the same treatment as any other investment.

The opportunity is substantial. AI accelerates data analysis, improves decision quality when properly structured, reduces operational costs, opens new markets, and allows smaller teams to accomplish what previously required far greater resources. At the ICAEW panel, Peter Lee cited a founder who now delivers in one day with four people what took fifty people two months. The exact numbers may vary. The trend does not.

The risks are genuine and numerous: model drift over time, the erosion of critical thinking when teams stop verifying AI outputs, the expansion of the cyber attack surface, the ethical implications of autonomous decision-making. All are legitimate concerns. A board that sees only risk, however, misses its core function. Worse, it may take the most damaging path available: doing nothing, or locking everything down while the market advances.

The question for a board is no longer "is AI risky?" Of course it is. The substantive questions are which tools and processes capture the opportunity while minimizing the risk of errors, which can be severe, and how to support teams through the transition.


Why hands-on competence changes everything

There is considerable talk about "AI literacy" as a governance goal. The term is well chosen: literacy does not develop through reading reports or attending webinars. It develops through use.

When a board member uses an AI tool for a few hours, they discover what no briefing can convey. The tool excels at summarizing a 40-page document, identifying patterns in a dataset, or structuring complex analysis. The same tool buries factual errors in flawless, perfectly worded paragraphs. It invents sources with a confidence a consultant might envy. It approves flawed arguments with polite enthusiasm instead of challenging them.

This dual experience of both strength and limitation fundamentally changes the questions asked around the boardroom table. A director with direct experience asks "how are our teams verifying the quality of what AI produces?" rather than "are we using AI?" They ask "what controls exist for high-risk use cases, and who is accountable?" rather than "are we compliant?"

Without that direct experience, the board depends entirely on other people's narratives. Management reports on productivity gains. Consultants sell AI strategies. Vendors promise intelligent solutions. The board approves, lacking reference points to distinguish genuine advances from well-packaged trends.



What this means in practice for governance

A board serious about AI governance must act on several fronts simultaneously.

Strategically, it must map AI usage across the organization: which departments, which tasks, at what risk levels, under what supervision. This mapping is the foundation for any serious governance. Many organizations, when asked to produce one, discover they cannot.

Operationally, it must examine verification and quality control processes. When an AI tool produces financial analysis, legal documents, or client communications, who reviews it? Against what standards? How often? Human review mechanisms are not innovation brakes. They are the precondition for reliable innovation.

The board must also ensure teams are supported through this transition. Support means training, certainly, but also genuine listening. What is their experience? Where do they encounter difficulties? What uses have they developed that might merit formalization or scaling? The most useful intelligence on how AI operates in practice comes from daily users, not management presentations.

On the ethical front, the board must distinguish between what regulation permits and what the organization, its employees, and stakeholders find acceptable. As Peter Lee noted at the ICAEW panel, regulatory compliance and ethical acceptability are separate matters. A company can meet every requirement of the EU AI Act and adopt practices its workforce finds unacceptable.

Directors must develop enough hands-on familiarity with these tools to exercise real judgment. They need not become technical experts. Sufficient direct experience allows them to pose intelligent questions, assess whether the answers hold up, and distinguish a genuine advance from impressive theater.


The symmetrical trap

Two modes of governance lead to failure.

  • The first is rushing forward seduced by productivity promises while skipping the necessary safeguards.

  • The second is locking everything down out of fear, banning all usage, and leaving employees who could benefit from these tools to fend for themselves, often through personal solutions outside the organization's control. Pauline Norstrom, at the ICAEW panel, described exactly this situation: CIOs instructed to block everything, and teams missing opportunities they could legitimately capture.

Sound governance sits between these poles. It requires understanding what AI enables, identifying where it creates value, governing high-risk uses, supporting teams, and building feedback loops that allow the organization to learn and adjust.

Doing this with discernment requires a concrete feel for the subject. Board competence in AI governance develops through practice: a few hours of attentive use teach more than ten hours of reading.

 

Marie Horodecki-Aymes is the founder of MHA Insights Inc., a consulting firm specializing in ESG strategy, responsible marketing, and AI governance. A Chartered Administrator (Adm.A., C.Adm.), she brings over 20 years of experience in brand strategy and sustainability in retail and consumer goods across Europe and Canada. She helps organizations navigate their sustainability transition and their responsible integration of AI.

Source: “AI Governance” panel, ICAEW Corporate Governance Conference, March 6, 2026.
