Will AI Eventually Replace a Board Member?

Can AI replace a corporate board member? With 55% of directors wanting a colleague replaced and most lacking AI literacy, the boardroom competency gap is the real risk. Ken Stasiak on why AI won't take board seats — but will expose who shouldn't have one.

Have one in mind?

According to PwC’s 2025 Annual Corporate Directors Survey, 55% of directors do — that’s more than half the boardroom saying a fellow member should be replaced. The highest number ever recorded.

Now pair that with this: 66% of directors admit they have limited to no AI knowledge. So the majority of boards have someone they want gone and a critical skills gap they can’t fill.

Should one of those seats go to an algorithm?

Before you dismiss it, consider what’s already happened.

In 2014, a Hong Kong venture capital firm appointed an AI called VITAL to its board. It had voting power on investment decisions. In 2016, a Finnish software company did something similar. A World Economic Forum survey found that 45% of IT professionals believed AI directors would be appointed to boards by 2025 (well, maybe 2026 🙂).

That didn’t happen. But something arguably more disruptive did.

AI Is Already in the Boardroom. It Just Doesn’t Have a Nameplate.

Directors are using AI right now — to summarize financial reports, draft committee notes, prep for meetings, analyze risk scenarios, and review legal documents. Recent analyses suggest that roughly 60–70% of boards report using AI to support governance work and strategy discussions.

But here’s the part that matters: none of those AI tools have a vote, a fiduciary duty, or legal accountability. The human directors do. And therein lies the problem.

When AI shapes the analysis that drives a board decision, but the humans signing off don’t fully understand how that analysis was generated — who’s really making the call? The director? Or the model?

Oxford Law researchers are already calling this out. They argue that the EU AI Act is creating two new fiduciary duties: AI due care and AI loyalty oversight. In plain English, directors now have a legal obligation to understand the AI tools influencing their decisions. Not at a technical level — but enough to ask the right questions and challenge the outputs. Technological literacy is becoming a baseline fiduciary competence.

Failing to interrogate an algorithm’s assumptions isn’t just a knowledge gap anymore. It may be a breach of duty.

Why AI Can’t Actually Sit on Your Board

Let’s be clear: AI cannot legally serve as a board director. Not today, and likely not anytime soon. Here’s why:

Corporate law in every major jurisdiction — Delaware, the UK, Germany, Hong Kong — requires directors to be natural persons. Fiduciary duty, personal liability, duty of care and loyalty all attach to humans. You can’t depose an algorithm. You can’t hold a model accountable in a derivative suit. When something goes wrong, someone has to answer for it, and “the AI did it” isn’t a legal defense.

The VITAL experiment proved this. Despite the headlines, the AI was given “observer status” only. The other directors retained all legal authority. It was a marketing stunt dressed up as innovation.

And there’s a deeper issue. Board governance isn’t just about processing information — it’s about judgment, relationships, political dynamics, and the ability to challenge a CEO face-to-face. AI can summarize a 200-page risk report in seconds. It can’t read the room when the CFO is being evasive. It can’t build trust with a management team over years of shared context. It can’t navigate the human complexity that defines real governance.

The Real Question Boards Should Be Asking

The question isn’t whether AI should replace a board member. It’s whether your board has the courage to replace the wrong humans with the right ones.

Here’s the uncomfortable math:

PwC says 55% of directors think a colleague should be replaced — the highest number ever recorded. McKinsey says boards with AI-savvy directors outperform peers by nearly 11 percentage points in return on equity. MIT research confirms that organizations with digitally literate boards consistently beat their industry average, while those without fall almost 4% below.

Yet only 35% of boards have fully incorporated AI into their oversight role. Nearly 40% of directors say they don’t receive sufficient AI education. And most boards still route AI oversight to an already overloaded audit committee rather than creating dedicated governance structures.

The gap isn’t artificial intelligence. It’s artificial confidence — boards that believe they’re governing AI effectively because they’ve added it to an agenda item, without building the literacy, structure, or accountability to actually do it.
