“We want AI.” We hear this from clients more and more often. The problem is that most of them don’t know exactly what they want, don’t have their data ready, and expect miracles. After two years of building ML capability in our company, we have a realistic perspective on where AI works in enterprise — and where it doesn’t.
Where to Start — Use Cases, Not Technology
Don’t start by choosing a framework (TensorFlow vs. PyTorch). Start with the question: what business problem am I solving? Our first successful use cases:
- Churn prediction: insurance company — which clients will leave? Gradient boosting model, 82% accuracy. ROI: 15% churn reduction = millions of CZK annually.
- Anomaly detection: bank — suspicious transactions. Isolation Forest, 40% reduction in false positives.
- Document classification: insurance company — automatic sorting of incoming documents. NLP + classifier, 91% accuracy.
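As a minimal sketch of the first use case: a gradient-boosted churn classifier with scikit-learn. The features and their relationship to churn below are invented for illustration — the real insurance features (tenure, premiums, claims history) are obviously different.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real client features
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
# Invented target: churn driven mostly by the first two features, plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

On real data the hard part is not this training loop but the feature table feeding it — which is exactly the point of the next section.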
Data Readiness — 80% of the Work
An ML model is only as good as its data. And data in a typical Czech enterprise company is… suboptimal. Duplicates, missing values, inconsistent formats, data silos. Before you start modeling, you need:
- Data audit — what you have, where it is, what’s the quality
- Data pipeline — ETL/ELT into an analytical repository
- Feature engineering — transforming raw data into model features
- Governance — who owns the data, GDPR compliance, access control
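The audit step can start very small. A sketch with pandas on a toy extract (the column names and records are made up) that surfaces the three usual problems — duplicate IDs, missing values, inconsistent formats:

```python
import pandas as pd

# Toy extract exhibiting the typical enterprise data problems
df = pd.DataFrame({
    "client_id": [1, 2, 2, 3, 4],                 # duplicate ID
    "email": ["a@example.cz", None, None, "c@example.cz", "d@example.cz"],
    "segment": ["retail", "retail", "RETAIL", "sme", "sme"],  # inconsistent casing
})

audit = {
    "rows": len(df),
    "duplicate_client_ids": int(df["client_id"].duplicated().sum()),
    "missing_per_column": df.isna().sum().to_dict(),
    "segment_values": sorted(df["segment"].str.lower().unique()),
}
```

Running a report like this per source table early on is what tells you whether you are facing weeks or months of pipeline work.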
For the insurance company churn prediction, we spent 3 months cleaning and preparing data and 2 weeks training the model. That ratio reflects reality.
MLOps — A Model in Production Is Just the Beginning
Training a model in a Jupyter notebook is something any data analyst can do. Getting that model into production and keeping it there — that’s an engineering problem. The MLOps stack we use:
- MLflow for experiment tracking and model registry
- Airflow for training pipeline orchestration
- Docker + Kubernetes for serving (Flask API in a container)
- Prometheus + Grafana for prediction monitoring
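To illustrate the serving piece, a hypothetical Dockerfile for the containerized Flask API. The file names and the gunicorn entry point are assumptions for the sketch, not our exact setup:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Install pinned dependencies first so this layer caches across model updates
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# app.py exposes the Flask app; model.pkl is the model artifact from the registry
COPY app.py model.pkl ./
EXPOSE 8000
# gunicorn in front of Flask for production-grade serving
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```

The same image then runs under Kubernetes, where Prometheus scrapes prediction metrics from the API.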
Model drift is a real problem. A churn model trained on pre-COVID data stopped working after COVID — client behavior changed. Automatic retraining with accuracy monitoring is a necessity.
Expectations vs. Reality
Management expects: “AI will solve our problem in a month.” Reality: data preparation 3 months, PoC 1 month, productionization 2 months, iteration ongoing. Total of 6–9 months to value. And not every use case pays off.
Build vs. Buy
For standard use cases (OCR, sentiment analysis, translation) — cloud AI services (Azure Cognitive Services, AWS Comprehend). Cheaper and faster than building your own model. For domain-specific problems (churn at a Czech insurance company) — build your own, because pre-trained models don’t understand local specifics.
AI Is a Tool, Not Magic
Start with a clear business problem. Invest in data before models. Plan for MLOps from the start. And above all — have realistic expectations. AI in enterprise isn’t a ChatGPT demo — it’s an engineering project like any other.