I was sitting in my office when the email came through. A board member had taken the company’s annual financial projections and cap table, dropped them into the free version of ChatGPT, and sent the output — copy and paste, no context, no methodology — to the entire board.
I almost fell out of my chair.
Not because the analysis was bad. Because of what had just happened to that data.
Confidential financial projections. Ownership structure. Investor positions. Dilution scenarios. All of it uploaded to a consumer-grade AI tool with no enterprise data protections, no encryption guarantees, and, at the time, a real possibility that it would be used to train the model. In a single copy-paste, some of the most sensitive information a company holds was sitting on someone else’s servers — with no audit trail and no way to pull it back.
And nobody on the board said a word.
The Double Standard Nobody Wants to Talk About
Here’s what keeps me up at night about this. If a CISO pushed customer data into a free AI tool, they’d be terminated. If an employee uploaded financial models to an unsanctioned platform, compliance would be on the phone within the hour.
But when a board member does it with the company’s most sensitive financial data? Silence. Maybe a polite “thanks for the analysis.”
That’s not oversight. That’s a double standard — and it’s creating one of the biggest unmanaged risks in corporate governance right now.
Boards across the country are asking management tough questions about AI policies. “What guardrails do we have in place? How are we protecting data? What’s our acceptable use policy?” These are the right questions. But they ring hollow when the people asking them are simultaneously copy-pasting cap tables into ChatGPT on their personal laptops.
The Shadow AI Problem Has Reached the Boardroom
We spent the last decade fighting shadow IT — employees spinning up unauthorized cloud instances, using personal devices for work, running company data through consumer apps. We built policies, deployed monitoring, trained people, and slowly brought it under control.
Now the same thing is happening at the board level, and almost nobody is talking about it.
Directors are using AI to summarize financial reports, draft committee notes, review legal documents, and prep for meetings. Some are doing it thoughtfully with enterprise tools. Many are not. They’re using whatever’s convenient — and “convenient” usually means the free version of whatever model is trending.
The boards governing AI risk are simultaneously creating AI risk. Let that sink in.
