Every engineering leader I know is running at least one AI meeting recorder. Fathom, Otter, Fireflies, Granola — they show up in your Zoom call, they transcribe everything, they spit out a tidy summary, and everyone moves on with their day.
I’ve been thinking about this a lot lately, because the same instinct that makes these tools attractive (let AI handle the busywork so I can focus) is the exact instinct that’s creating a massive, unexamined attack surface across organizations.
The Exfiltration You Invited
Here’s what’s actually happening when you turn on an AI meeting recorder. Every word spoken in your standup, your incident review, your board prep, and your 1:1 where someone mentions a customer’s PII is being captured, transcribed, processed, and stored on someone else’s infrastructure.
Fathom, for example, stores all data in the United States and uses de-identified customer data to improve their proprietary AI models. You can opt out, but how many of your engineers have opted out? Did you even know that’s a setting?
This isn’t a Fathom-specific problem. It’s a category problem. AI meeting recorders are the shadow IT of 2026, except they’re recording your most sensitive conversations instead of just syncing files to Dropbox.
I wrote recently about the “minutes added to workforce” framing: the idea that AI’s real value is capacity expansion, not cost cutting. That still holds. But capacity expansion without a security model is just borrowing from a different budget. You’re gaining 30 minutes a day on meeting follow-ups and potentially giving away your competitive advantage in the process.
Build vs. Buy When “Buy” Means “Exfiltrate”
The traditional build vs. buy calculus is about time, cost, and maintenance burden. AI tools have introduced a variable that most frameworks ignore: data exposure as a function of the buy decision itself.
When you buy a project management tool, the data flowing through it is structured, intentional, and scoped. When you buy an AI meeting recorder, the data flowing through it is everything anyone says in any meeting it joins. There is no scoping and no classification; the tool captures the quarterly revenue discussion and the weekend barbecue plans with equal fidelity.
This changes the build vs. buy math in ways that most CTO/CISO conversations haven’t caught up with yet. The question isn’t just “is this tool SOC 2 compliant?” The question is: what is the blast radius if the vendor is compromised, and does the value of the tool justify that radius?
For an AI notetaker, the blast radius is effectively “every conversation your company has.” That alone should give you pause.
The Same Problem Is in Your IDE
Meeting recorders are the obvious case because the exfiltration is so literal: audio, video, full transcripts. The same risk pattern applies to the AI coding tools your engineers are using every day.
When your team uses Cursor or Copilot or Claude Code, context from your codebase is being sent to an LLM provider for inference. The question of which model is powering your tools and what that provider’s data retention and training policies are is a security question, not just a productivity question.
I wrote about the Agent Ops observability gap a few weeks ago: most engineering teams have no visibility into what their AI tools are actually doing with their code. That gap isn’t just an engineering management problem; it’s a security problem. If you can’t trace what context was sent to which model, you can’t assess your exposure.
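There’s no off-the-shelf fix for that yet, but the shape of one is simple: wrap every model call in an audit log before the context leaves your network. Here’s a minimal sketch in Python, using OpenAI’s chat completions client as one example provider; the log format and the `data_class` labels are my own invention, not anyone’s standard.

```python
import hashlib
import json
import time

from openai import OpenAI  # any provider SDK works; OpenAI shown as one example

client = OpenAI()  # reads OPENAI_API_KEY from the environment
AUDIT_LOG = "llm_audit.jsonl"

def audited_completion(model: str, messages: list[dict], data_class: str) -> str:
    """Call the model, but record what context went where before it leaves."""
    # Hash the outbound context so the audit log doesn't itself become
    # a second copy of your sensitive data.
    payload = json.dumps(messages, sort_keys=True).encode()
    record = {
        "ts": time.time(),
        "model": model,
        "provider": "openai",
        "data_class": data_class,  # e.g. "public", "internal", "restricted"
        "context_sha256": hashlib.sha256(payload).hexdigest(),
        "context_bytes": len(payload),
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content
```

With something like this in place, “what context was sent to which model” becomes a query against a log file instead of a shrug.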
Most of the major LLM providers (Anthropic, OpenAI, Google) contractually commit to not training on enterprise customer data. But “contractually commits” and “technically impossible” are different things. And the supply chain around these tools is getting more complex, not less: sub-processors, plugin ecosystems, third-party integrations, each one another node in your attack surface.
A Framework That Actually Helps
Here’s how I think about leverage vs. pure risk for AI tools in 2026:
Classify the data, not the tool. Stop asking “is this tool safe?” and start asking “what data does this tool touch?” A coding assistant that sees your open-source projects is a different risk profile than one that sees your proprietary billing engine. A meeting recorder in your all-hands is different from one in your M&A discussion.
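In practice, that classification can start as a policy table plus one gate function. A toy sketch, with hypothetical tool names and clearance levels:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0      # open-source repos, published docs
    INTERNAL = 1    # roadmaps, standups
    RESTRICTED = 2  # billing engine, incident reviews, M&A, customer PII

# Hypothetical inventory: the highest sensitivity each tool is cleared for.
TOOL_CLEARANCE = {
    "coding-assistant": Sensitivity.INTERNAL,
    "meeting-recorder": Sensitivity.INTERNAL,
    "public-chatbot": Sensitivity.PUBLIC,
}

def may_touch(tool: str, data: Sensitivity) -> bool:
    """A tool may only touch data at or below its clearance."""
    return data <= TOOL_CLEARANCE.get(tool, Sensitivity.PUBLIC)

assert may_touch("meeting-recorder", Sensitivity.INTERNAL)
assert not may_touch("meeting-recorder", Sensitivity.RESTRICTED)  # no M&A calls
```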
Default to the narrowest context window possible. Every AI tool works better with more context. Every security posture works better with less. The tension is real, and the answer is intentional scoping… not blanket access and not blanket prohibition.
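Intentional scoping can be as blunt as a pre-flight filter that drops restricted paths before any context is sent to a model. A sketch, with illustrative deny-list patterns:

```python
from fnmatch import fnmatch

# Illustrative deny-list: paths an AI coding tool should never see.
# Note: fnmatch's "*" matches across "/" separators, unlike shell globbing.
RESTRICTED_GLOBS = [
    "billing/*",
    "*/secrets/*",
    "*.env",
]

def scope_context(files: dict[str, str]) -> dict[str, str]:
    """Drop restricted files from the context before it is sent anywhere."""
    return {
        path: content
        for path, content in files.items()
        if not any(fnmatch(path, pattern) for pattern in RESTRICTED_GLOBS)
    }

context = {
    "README.md": "...",
    "billing/engine.py": "...",  # proprietary; never leaves the building
    "src/api/handlers.py": "...",
}
print(sorted(scope_context(context)))  # ['README.md', 'src/api/handlers.py']
```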
Audit the opt-out, not just the opt-in. Most AI tools default to broad data collection and give you toggles to restrict it. Your security posture is defined by whether anyone actually toggled those settings. If you haven’t audited what your AI tools are collecting across your org, you don’t have a security posture beyond hope.
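The audit doesn’t need vendor APIs to start. Declare the settings each tool is supposed to have, check the admin consoles, and diff. A toy sketch, where the tool and setting names are hypothetical:

```python
# Required: the non-default settings each tool must have.
REQUIRED_OPT_OUTS = {
    "meeting-recorder": {"train_on_customer_data": False, "retention_days": 30},
    "coding-assistant": {"telemetry_sharing": False},
}

# Observed: what you actually found when you checked the admin console.
OBSERVED = {
    "meeting-recorder": {"train_on_customer_data": True, "retention_days": 365},
    "coding-assistant": {"telemetry_sharing": False},
}

for tool, required in REQUIRED_OPT_OUTS.items():
    for setting, expected in required.items():
        actual = OBSERVED.get(tool, {}).get(setting)
        if actual != expected:
            print(f"FAIL {tool}.{setting}: expected {expected!r}, found {actual!r}")
```

Running this across a real inventory is usually the moment a team discovers their posture is “whatever the defaults were.”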
Treat velocity as a variable, not a constant. The argument for adopting every AI tool immediately is that the speed advantage compounds. But so does the risk. The companies I’ve seen navigate this well move fast on low-blast-radius tools and slowly on high-blast-radius ones. They aren’t anti-AI; they’re anti-unexamined-AI.
Does It Even Matter Anymore?
There’s a fatalist argument I hear more and more: the pace of AI adoption is so fast that trying to govern it is futile. Your employees are already using personal AI accounts on corporate devices. The horse has left the barn.
I understand the impulse, but I also spent years scaling engineering teams through hyper-growth, and I can tell you that “the pace of change makes governance impossible” is what people say right before a breach makes governance mandatory.
The pre-AI to post-AI transition isn’t just about architecture and team structure. It’s about building organizations that can move fast and reason about risk at the same time. That has always been the job; AI just raised the stakes.
Your AI notetaker is probably fine. But “probably fine” isn’t a security strategy, and the decision to let it record every conversation in your company is one of the biggest security decisions you’re making this year, whether you realize you’re making it or not.
This post connects to ideas from Minutes Added To Workforce, The Next Platform Giant Is Hiding in Plain Sight, and The Pre-AI to Post-AI Company Transition.