In August 2023, OpenAI launched ChatGPT Enterprise and the corporate world took notice. Finally, an official answer to the question that troubles every CISO: “How do we give employees ChatGPT without compromising company data?” We’ve been through several enterprise deployments and share what works — and where the pitfalls lie.
Why a Corporate VPN and “Don’t Send Anything Sensitive” Aren’t Enough
Most companies in the first half of 2023 responded to the ChatGPT boom with an internal email: “Do not use ChatGPT for company data.” The result? Employees used it anyway — just secretly. Shadow AI is a real problem, and banning it doesn’t solve it. A managed deployment with clear rules does.
The issue with public ChatGPT is simple: conversations in the consumer web app may be used for model training unless the user opts out. (The API has excluded customer data from training by default since March 2023, but the free web app is exactly what employees reach for.) For regulated industries such as banking, insurance, and healthcare, this is unacceptable. GDPR, banking secrecy, NIS2: regulations don’t make exceptions for “but it’s useful.”
Azure OpenAI — Data Stays in Your Tenant
Our recommended approach: Azure OpenAI Service. GPT-4 and GPT-3.5 Turbo models run in an Azure datacenter (West Europe for Czech clients). Data is not shared with OpenAI, not used for training, and stays within your Azure tenant. Private endpoint, VNET integration, managed identity — the standard Azure security stack.
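As a minimal sketch of what a request against such a deployment looks like, the stdlib-only snippet below builds the REST call. The resource name `contoso-openai` and deployment name `gpt4` are placeholders, the `api-version` should be checked against your resource, and in production you would authenticate with a managed-identity token instead of an API key:

```python
import json
import urllib.request

def build_chat_request(resource: str, deployment: str, api_key: str,
                       messages: list[dict]) -> urllib.request.Request:
    """Build a chat-completions request for an Azure OpenAI deployment.

    With a private endpoint and VNET integration, this hostname resolves
    to a private IP — traffic never touches the public internet.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version=2023-07-01-preview")
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers={
        "api-key": api_key,               # prefer a managed-identity token in production
        "Content-Type": "application/json",
    })
```

The point of the sketch: the endpoint is *your* resource in *your* tenant, which is what makes the compliance story work.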
For clients who already have an Azure Enterprise Agreement, deployment is relatively straightforward. Resource group, RBAC, diagnostic logs to Log Analytics — nothing new under the sun. Only the model is new.
ChatGPT Enterprise vs. Azure OpenAI — Which to Choose?
ChatGPT Enterprise (directly from OpenAI) is a managed SaaS — user interface, admin console, SSO via SAML. Advantage: zero infrastructure. Disadvantage: limited customization, vendor lock-in, the data processing agreement is with OpenAI (US jurisdiction).
Azure OpenAI is API-first — you build your own frontend and have full control over infrastructure. Advantage: compliance (EU data residency), integration with your existing Azure stack, custom RAG pipeline. Disadvantage: you have to build it yourself.
For Czech enterprise clients, we typically recommend Azure OpenAI. Reason: GDPR compliance, existing Azure contracts, and most importantly — the need for integration with internal systems, not just a chat window.
Secure Deployment Architecture
A typical architecture we deploy:
- API Gateway (Azure API Management) — rate limiting, authentication, logging of all requests
- PII filter — Azure AI Content Safety + custom regex for national ID numbers, account numbers, internal codes
- Azure OpenAI with private endpoint — no public internet
- Prompt injection protection — system prompt validation, input sanitization
- Audit log — who, when, what was asked, what response was received → Log Analytics → Sentinel
- Cost management — token budget per user/department, alerting on anomalies
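The PII filter from the list above can be illustrated with a small sketch. The two patterns are examples only (Czech national ID in the YYMMDD/XXXX shape and a simplified number/bank-code account format); a real deployment layers custom rules like these *on top of* Azure AI Content Safety, not instead of it:

```python
import re

# Illustrative patterns — placeholders for a client's own PII catalog.
PII_PATTERNS = {
    "national_id": re.compile(r"\b\d{6}/\d{3,4}\b"),      # Czech rodné číslo
    "account_number": re.compile(r"\b\d{2,10}/\d{4}\b"),  # number/bank-code
}

def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders before the prompt leaves the gateway.

    Returns the redacted prompt plus the list of pattern names that fired,
    which feeds the audit log and alerting.
    """
    hits = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[{name.upper()}]", prompt)
    return prompt, hits
```

Running the filter in the gateway, rather than in the client, means shadow integrations can’t bypass it.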
Governance — Rules of the Game
Technology is half the solution. The other half is governance:
Acceptable Use Policy: what may and may not go into the LLM. Specific examples — “allowed: summarizing public documents, drafting emails without client data. Not allowed: client personal data, internal financial data, core system source code.”
Data classification: linked to the existing data classification scheme. Public → OK. Internal → OK with PII filter. Confidential → prohibited. Secret → prohibited.
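The classification rules above reduce to a tiny policy table. A sketch, assuming the four-level scheme from the paragraph (label names are placeholders for a client’s existing data classification scheme):

```python
# Default-deny mapping from data classification to LLM usage decision.
POLICY = {
    "public": "allow",
    "internal": "allow_with_pii_filter",
    "confidential": "block",
    "secret": "block",
}

def gate(classification: str) -> str:
    """Decide what happens to content with a given classification label.

    Unknown labels fall through to "block" — default deny is the only
    safe posture when the classification scheme evolves.
    """
    return POLICY.get(classification.lower(), "block")
```

Encoding the policy as data, not scattered if-statements, keeps the Acceptable Use Policy and the enforcement point in sync.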
Training: without training, every policy is useless. Employee workshops: what is an LLM, how it works, why it hallucinates, how to write effective prompts, what to never send.
Measurable Results
At one client (a mid-sized insurance company, 800 employees), we measured after three months: 73% of employees actively use the internal ChatGPT. Average time savings of 45 minutes per day for knowledge workers. Zero security incidents. Shadow AI usage dropped from an estimated 40% to less than 5%.
Costs: Azure OpenAI GPT-4 Turbo for 800 users came to approximately €1,400/month, i.e. about €1.75 per user per month. ROI was achieved within 6 weeks.
Common Deployment Mistakes
“We’ll deploy it over the weekend”: technically possible, but governance takes weeks. Legal review of the DPA, security assessment, training — plan for 6–8 weeks for regulated industries.
No monitoring: without logging, you don’t know if someone sent the entire client database into a prompt. An audit trail isn’t nice-to-have — it’s a must-have.
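What ends up in the audit trail can be as simple as one JSON record per request. A sketch with hypothetical field names; in the architecture above these records flow from the gateway into Log Analytics and on to Sentinel:

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, department: str, prompt_tokens: int,
                 completion_tokens: int, pii_hits: list[str]) -> str:
    """Serialize one request's audit entry: who, when, how much, and
    whether the PII filter fired. Prompt text itself can be logged or
    hashed depending on the client's retention policy."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "department": department,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "pii_hits": pii_hits,
    })
```

Per-user token counts in the same record also feed the cost budgets and anomaly alerting mentioned earlier.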
Inflated expectations: ChatGPT isn’t an Oracle. It won’t give you precise numbers from your ERP. For that, you need RAG over structured data.
Secure Deployment Is Possible — But Not Trivial
ChatGPT in enterprise isn’t about technology — it’s about governance, compliance, and change management. Azure OpenAI handles the technical layer. The rest is on you. And that “rest” is 70% of the work.
Need help with implementation?
Our experts can help with design, implementation, and operations. From architecture to production.
Contact us