Someone on Your Team Fed Confidential Information to AI This Morning

You don't know who. You don't know what. You have no log of it.

But it happened. Probably before your first meeting.

Maybe your executive assistant pasted your travel itinerary into ChatGPT to draft a briefing document. Your meetings. Hotel locations. Arrival times. Security details. Now on OpenAI's servers.

Maybe someone in finance uploaded the acquisition target's financials to summarize for the deal team. Material nonpublic information. Potentially in a training dataset.

Maybe your outside counsel fed litigation strategy into Claude to help draft a motion. Privileged work product. Privilege potentially waived.

Maybe HR uploaded performance reviews to help write termination documentation. Employee names. Compensation. Performance issues. Reasons for dismissal.

You won't know it happened. Unless it becomes a breach. Then everyone knows.

The Samsung Wake-Up Call

April 2023. Three Samsung engineers. Twenty days. Three separate leaks to ChatGPT.

Source code. Chip specifications. Internal meeting transcripts. All uploaded by people trying to work faster. All now on OpenAI's servers, eligible as training data for a model serving 100 million users.

Samsung banned ChatGPT within weeks. Apple banned it. Amazon sent warnings after noticing ChatGPT responses that looked like internal Amazon data. JPMorgan, Goldman Sachs, Citigroup, Bank of America, Deutsche Bank, Wells Fargo, Verizon, Northrop Grumman, Accenture. All restricted or banned access.

The Samsung story made headlines because it involved engineers and source code. Technical data leaked through technical tools. Easy to understand.

But Samsung is the wrong story.

The bigger exposure isn't in engineering. It's everywhere else.

The Leaks You're Not Thinking About

Your executive assistant manages your calendar across time zones. Coordinates complex travel. Prepares briefing documents. Drafts communications.

AI makes all of that faster. Upload your schedule, get a formatted itinerary. Paste meeting notes, get action items. Feed in background documents, get a briefing memo.

What's in those uploads?

Your location on specific dates. Who you're meeting. Where you're staying. When you're arriving. Whether you're traveling with security. Whether you're traveling alone.

For most people, that's calendar data. For a CEO, a board member, a principal with a security profile? That's targeting information.

A kidnapping team in Latin America would pay real money for that itinerary. Your assistant gave it away because ChatGPT writes better briefing docs.

The Finance Problem

Your M&A team is working an acquisition. Due diligence documents everywhere. Financial models. Projections. Integration plans.

Someone needs to summarize the target's financials for the steering committee. Forty pages into two. Tedious work. ChatGPT does it in seconds.

What just happened?

Material nonpublic information now lives on external servers. The terms of service say it may be used for training. The vendor whose model serves your competitors now holds details of a deal you haven't announced.

Securities violation? Ask your GC. Better yet, ask if their team is doing the same thing.

The Outside Counsel Problem

Your law firm bills $1,500 an hour. For that money, you expect confidentiality. Privilege. Protection.

By industry survey, 79% of lawyers used AI in 2024. Only 10% of law firms had a written AI policy.

Your outside counsel is using AI on your matters right now. The associate drafting your contract at midnight uses ChatGPT to check clause language. The partner preparing your board presentation uses Claude to summarize key points. The litigation team uses AI to analyze documents.

Attorney-client privilege requires confidentiality. Share privileged information with a third party, and privilege may be waived.

Your lawyer pastes your M&A term sheet into ChatGPT. They've shared it with OpenAI. OpenAI's terms say they may use inputs for training. Still confidential? Courts haven't decided. The question is on the table.

The American Bar Association issued Formal Opinion 512 in July 2024. Lawyers must understand how AI tools store data. Must know whether inputs train models. May need client consent before using AI on certain matters.

How many of your outside firms have compliant AI policies?

How many of your matters have already been fed through public models?

You don't know. Neither do they.

The HR Problem

Terminations require documentation. Performance issues. Warnings given. Reasons for separation. Severance terms.

HR needs to write it all up. Legal needs to review. Documentation needs to be tight because terminated employees sue.

AI writes better first drafts than most HR generalists. Upload the performance file, get a termination memo.

What's in that upload?

Employee names. Social Security numbers. Compensation data. Medical leave information. Disability accommodations. Performance ratings. Documented conflicts.

All on external servers. Covered by someone else's privacy policy.

One plaintiff's attorney in discovery: "Did you use AI tools to prepare this documentation? Which tools? What information did you input?"

Now you're explaining why employee PII lives on OpenAI's servers.

The Security Problem

Your security team produces threat assessments. Travel risk analysis. Protective intelligence summaries. Incident pattern analysis.

AI is good at this work. Feed in data, get analysis. Upload reports, get summaries. Input indicators, get pattern recognition.

What might your security team ask?

Summarize our threat reporting on Mexico. What patterns appear in these incident logs? Assess risk for this principal's travel to São Paulo. Review our intelligence on this individual. What security vulnerabilities exist at this facility?

Every prompt exposes something. Your threat methodology. Your identified vulnerabilities. Your intelligence sources. Your principals' travel patterns. Your security gaps.

A foreign intelligence service would pay millions for that information. A kidnapping team would pay thousands. An insider threat would pay attention.

Your analyst gave it away. Trying to write a report faster.

Why The Bans Failed

Samsung banned ChatGPT. The ban lasted until employees found workarounds.

Personal phones. Home laptops. Browser tools that don't require downloads. Incognito windows. Personal accounts.

IT calls this Shadow AI. Technology use they can't see, can't log, can't control.

The productivity gap was too large. People who use AI work faster. They produce more. The tools are good at summarizing, drafting, analyzing, formatting.

You can write a policy. You cannot enforce it.

Every company that banned public AI faced the same question. Not "how do we stop this?" but "how do we enable this safely?"

What Secure Architecture Looks Like

The Fortune 500 spent three years on this. Panic. Bans. Shadow AI. Then new infrastructure.

Private AI instances inside the corporate cloud. Azure OpenAI. AWS Bedrock. Contracts that prohibit using your data for training. Data that stays inside your own tenant.
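
That first layer in code, as a hedged sketch: a call through Amazon Bedrock using the AWS SDK for Python. The region, model ID, and prompt are placeholders; the point is that the request runs inside your own AWS account and, under Bedrock's service terms, is not used to train the underlying models.

```python
import boto3

# Invoke a model through Amazon Bedrock inside your own AWS account.
# Region and model ID are placeholders; use the ones you actually deploy.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize this internal memo: ..."}],
    }],
)

# Prompts and completions stay within the account; Bedrock's terms
# prohibit using customer content to train the base models.
print(response["output"]["message"]["content"][0]["text"])
```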

A gateway layer between users and models. Every prompt passes through. Scans for Social Security numbers, credit cards, confidential markings. Blocks sensitive data before it reaches the model. Logs every query.
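
What that gateway scan might look like, reduced to a minimal sketch. The patterns and names below are illustrative, not a production DLP ruleset; real deployments layer validated detectors, checksum tests, and context scoring on top of the logging.

```python
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Illustrative patterns only. Production DLP adds validated detectors,
# checksum tests (e.g., Luhn for card numbers), and context scoring.
BLOCK_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "marking": re.compile(r"\b(CONFIDENTIAL|PRIVILEGED|MNPI)\b", re.IGNORECASE),
}

def screen_prompt(user: str, prompt: str) -> bool:
    """Allow or block a prompt before it reaches the model. Log either way."""
    for reason, pattern in BLOCK_PATTERNS.items():
        if pattern.search(prompt):
            log.warning("BLOCKED user=%s reason=%s", user, reason)
            return False
    log.info("ALLOWED user=%s chars=%d", user, len(prompt))
    return True

if __name__ == "__main__":
    assert screen_prompt("analyst1", "Summarize travel risk trends for Q3")
    assert not screen_prompt("hr_user", "Draft a memo for SSN 123-45-6789")
```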

Retrieval systems (retrieval-augmented generation) that let AI access internal documents without learning from them. Ask a question. System pulls relevant files. Feeds them as context. Generates an answer. The model forgets everything once the call ends. Nothing becomes training data.
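
That retrieval pattern, sketched end to end. Everything here is a stand-in: embed is a toy bag-of-words in place of a real embedding model, DOCS replaces your document store, and private_llm is a placeholder for whatever in-perimeter model endpoint you run.

```python
import math
from collections import Counter

# Stand-in for your internal document store.
DOCS = {
    "travel-policy.txt": "Executive travel to high-risk regions requires an advance security survey.",
    "deal-memo.txt": "Project diligence covers target financials and integration planning.",
}

def embed(text: str) -> Counter:
    # Toy bag-of-words in place of a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    ranked = sorted(DOCS.values(), key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def private_llm(prompt: str) -> str:
    # Placeholder for your in-perimeter model endpoint.
    return f"[answer grounded in: {prompt[:60]}...]"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # Documents ride along as context for this one call only.
    # Nothing is stored by the model; nothing updates its weights.
    return private_llm(f"Context:\n{context}\n\nQuestion: {question}")

print(answer("What does executive travel to high-risk regions require?"))
```

The design choice that matters is in the last comment: the documents are request-scoped context, not training input.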

Browser controls that catch people before they paste into public tools. Warning. Block. Redirect.

Not walls. Airlocks.

The Board Question

If you're a director, ask this at your next meeting:

What's our AI usage policy? What tools are employees actually using? What controls prevent confidential data from reaching public models? What logging do we have? What's our outside counsel's AI policy? Has anyone audited what's already been exposed?

Most boards can't answer these questions. Most management teams can't either.

The exposure lives in the gap between what policy says and what people do.

Three Options

Ban public AI. Write policies. Send reminders. Watch productivity gaps widen. Push usage underground. Hope nothing leaks.

Ignore the problem. Assume your people follow rules they have every incentive to break. Wait for the breach notification. Or the regulatory inquiry. Or the plaintiff's attorney asking what AI tools were used.

Build infrastructure. Give your people AI capability inside architecture you control. Visibility into what's being asked. Logs for compliance. Data that stays yours.

What We Built

I spent years running security operations in Latin America. Managing thousands of personnel. Protecting information where the threat was real.

When AI tools started creating a new category of data exfiltration risk, I built what I would have wanted.

Centinela Vault. Dedicated environment. Encrypted private instances. Your data never touches public AI models. No third-party API exposure.

Gateway layer with data loss prevention, access controls, full audit trail. Retrieval architecture that lets you query internal information without training external models.

For a full air gap: dedicated infrastructure. Private cloud or on-premises. Nothing leaves your environment.

Your executive assistant is going to use AI. So will your finance team, your HR team, your security team. Your outside counsel already does.

The question isn't whether they use it. The question is whether you control where your data goes.