The client (today known under its international brand name) is the largest alternative delivery platform in the Czech Republic and one of the fastest-growing logistics players in Central Europe. With more than 500,000 parcels per day, a network of over 3,000 parcel lockers, and a presence in dozens of countries, this is an infrastructure on which thousands of e-shops and millions of end customers depend.
Our task was to design and implement a new logistics information system that would replace the ageing legacy platform and enable the client to scale operations to the next order of magnitude — without losing a single parcel.
Challenge¶
Legacy System at Capacity Limits¶
The client’s original information system grew organically alongside the company. The monolithic PHP application with a relational database served faithfully in the early days, but with the exponential increase in parcel volume it ran into fundamental limits. Processing the daily data import from carriers took hours instead of minutes. Adding a new carrier required weeks of manual integration. And every Black Friday meant overnight shifts for the ops team, manually scaling infrastructure and hoping the system would hold.
Logistics Chain Complexity¶
The client is not a simple courier service. It is a sophisticated logistics ecosystem encompassing:
- Automated depots with sorting lines and robotic handlers
- Parcel lockers — IoT devices with their own firmware, connectivity, and remote management
- Mobile applications for drivers, depot operators, and end customers
- Integration with 50+ external carriers — from Czech Post through DPD to local delivery companies in Romania
- Cross-border logistics with customs documentation and multi-country compliance
Each of these domains had its own requirements for data consistency, latency, and availability. Integrating them into a single coherent system required a fundamentally different architectural approach.
Zero Tolerance for Downtime¶
In logistics there is no “maintenance window”. Parcels are processed 24/7, depots run in shifts, and end customers expect real-time information about their parcel. Any outage means real financial losses — unprocessed parcels, delivery delays, escalations from e-shops. The availability requirement was 99.95% — and we delivered 99.97%.
Solution¶
Event-driven Microservices¶
The core of the new system is an event-driven architecture built on microservices. Each bounded context — tracking, routing, depot operations, billing, partner integration — runs as an independent service with its own database (database-per-service pattern). Communication between services is asynchronous via the RabbitMQ message broker.
This approach enables:
- Independent deployment of individual services without affecting the entire system
- Horizontal scaling — bottleneck services (e.g. tracking under peak load) can be scaled independently
- Fault isolation — the failure of one service does not bring down the entire system
- Technological heterogeneity — computationally intensive services in .NET, data pipelines in Python
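The decoupling described above rests on topic-based routing: services publish events with a routing key, and each consumer binds only to the patterns it cares about. The sketch below shows the pattern with a minimal in-memory bus standing in for RabbitMQ; the routing keys and service reactions (`parcel.*`, a tracking subscriber, a billing subscriber) are illustrative assumptions, not the client's actual topology.

```python
import fnmatch
from collections import defaultdict

class EventBus:
    """Minimal in-memory stand-in for a topic exchange (RabbitMQ in production)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # routing pattern -> handlers

    def subscribe(self, pattern, handler):
        self._subscribers[pattern].append(handler)

    def publish(self, routing_key, event):
        # Deliver the event to every handler whose pattern matches the key.
        for pattern, handlers in self._subscribers.items():
            if fnmatch.fnmatch(routing_key, pattern):
                for handler in handlers:
                    handler(event)

bus = EventBus()
notifications = []

# Hypothetical bindings: tracking reacts to every parcel event,
# billing only to completed deliveries.
bus.subscribe("parcel.*", lambda e: notifications.append(("tracking", e["id"])))
bus.subscribe("parcel.delivered", lambda e: notifications.append(("billing", e["id"])))

bus.publish("parcel.scanned", {"id": "CZ123", "depot": "Prague-9"})
bus.publish("parcel.delivered", {"id": "CZ123", "locker": "Brno-04"})
```

Because publishers never reference consumers directly, a new service (say, analytics) can subscribe to existing events without any change to the services that emit them.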
Real-time Tracking Engine¶
The heart of the system is a tracking engine processing hundreds of thousands of status updates per day. Every parcel passes through dozens of states: order receipt, depot intake, sorting, dispatch, loading onto a vehicle, delivery to a parcel locker, and finally customer collection.
The tracking engine is implemented as a stream processing pipeline:
- Ingestion layer — receiving events from IoT sensors, barcode scanning, driver GPS data, and carrier APIs
- Processing layer — validation, deduplication, enrichment (adding geolocation, ETA calculation) and business rules evaluation
- Storage layer — event store for complete history + materialised views for fast queries
- Notification layer — push notifications, SMS, email and webhooks for e-shops
The entire pipeline processes events with a median latency below 200ms from ingestion to notification.
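A single pass of the processing layer can be sketched as follows. This is a deliberately simplified stand-in: the field names (`event_id`, `parcel_id`, `scanned_at`) are assumptions, the dedup set would be a TTL'd Redis structure rather than in-process memory, and the real ETA comes from an ML model rather than a fixed offset.

```python
from datetime import datetime, timedelta

seen_event_ids = set()  # in production: a TTL'd Redis set, not process memory

def process(event):
    """One pass of the processing layer: validate, deduplicate, enrich."""
    # Validation: drop events missing mandatory fields.
    if not event.get("parcel_id") or not event.get("event_id"):
        return None
    # Deduplication: scanners and carrier APIs may deliver the same event twice.
    if event["event_id"] in seen_event_ids:
        return None
    seen_event_ids.add(event["event_id"])
    # Enrichment: a naive fixed-offset ETA stands in for the real ML prediction.
    event["eta"] = (datetime.fromisoformat(event["scanned_at"])
                    + timedelta(hours=24)).isoformat()
    return event

sample = {"event_id": "evt-1", "parcel_id": "CZ123",
          "scanned_at": "2024-11-29T08:00:00"}
first = process(dict(sample))   # accepted and enriched
dup = process(dict(sample))     # same event_id -> dropped
```

Making each stage a pure function over a single event keeps the pipeline easy to scale horizontally: any worker can pick up any event.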
Multi-tenant Partner Platform¶
E-shops and partners access the system via an API gateway that provides:
- REST API with OpenAPI specification for standard integrations
- Webhook endpoints for real-time notifications about status changes
- Bulk import/export for large e-shops processing thousands of parcels per day
- Self-service portal with dashboards, reports, and delivery rule configuration
The multi-tenant architecture ensures data isolation between partners while sharing infrastructure. Each partner has its own API keys, rate limits, SLA, and billing profile.
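Per-tenant rate limits of the kind described above are typically enforced at the gateway with counters keyed by API key. The sketch below uses a fixed-window counter; the tenant names and limits are invented for illustration, and in production the counters would live in Redis (which the infrastructure section notes is used for rate limiting) rather than a local dict.

```python
import time
from collections import defaultdict

class TenantRateLimiter:
    """Fixed-window rate limiter keyed by API key.

    A sketch of the gateway pattern: limits and tenants are illustrative,
    and the counters would be shared Redis keys in production.
    """

    def __init__(self, limits, window_seconds=60):
        self.limits = limits                  # api_key -> requests per window
        self.window = window_seconds
        self.counters = defaultdict(int)      # (api_key, window index) -> count

    def allow(self, api_key, now=None):
        now = time.time() if now is None else now
        bucket = (api_key, int(now // self.window))
        # Unknown keys get a limit of 0, i.e. are rejected outright.
        if self.counters[bucket] >= self.limits.get(api_key, 0):
            return False
        self.counters[bucket] += 1
        return True

limiter = TenantRateLimiter({"eshop-a": 2, "eshop-b": 100})
results = [limiter.allow("eshop-a", now=0) for _ in range(3)]
# results -> [True, True, False]: the third call exceeds eshop-a's window limit
```

Keying the counter by `(api_key, window)` means isolation falls out naturally: one partner exhausting its quota never affects another's.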
Parcel Locker Management System¶
Parcel lockers are fully automated collection points distributed across the Czech Republic and other countries. Each locker is essentially an IoT edge device — with its own control system, connectivity (4G + Wi-Fi fallback), temperature and humidity sensors, camera system, and electromechanical locks.
The management system provides:
- Remote monitoring — status of every locker, capacity, temperature, connectivity
- OTA firmware updates — secure firmware updates via an encrypted channel
- Slot optimisation — a compartment allocation algorithm maximising capacity utilisation
- Predictive maintenance — anomaly detection (e.g. overheating, repeated lock failures) and automatic service ticket creation
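The anomaly detection behind predictive maintenance can be as simple as flagging sensor readings that deviate strongly from a locker's recent baseline. The z-score sketch below is an assumption about the approach, not the client's actual detector, and the threshold and sample readings are invented.

```python
from statistics import mean, stdev

def detect_anomalies(readings, threshold=2.0):
    """Return indices of readings more than `threshold` standard deviations
    from the mean -- a deliberately simple stand-in for the real detector."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, r in enumerate(readings) if abs(r - mu) / sigma > threshold]

# Hypothetical temperature series from one locker (°C); the spike to 80
# would trigger an automatic service ticket.
flagged = detect_anomalies([21, 22, 21, 23, 22, 80])
```

In practice, one flagged reading would feed a rule (e.g. "N anomalies within an hour") before a ticket is created, to avoid paging technicians on transient sensor noise.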
Mobile Applications¶
We delivered three mobile applications sharing a common codebase:
- Customer app — parcel tracking, delivery point selection, opening the parcel locker via NFC/QR
- Driver app — optimised routes, delivery confirmation, parcel scanning
- Depot app — parcel intake, sorting and dispatch, stocktaking
Architecture¶
Infrastructure¶
The entire system runs on Azure Kubernetes Service (AKS) with a multi-region deployment for high availability. The infrastructure is fully defined as code using Terraform — from AKS clusters through managed databases (Azure Database for PostgreSQL) to networking and monitoring.
Key infrastructure components:
- AKS clusters — production (3 node pools: system, compute, memory-optimised), staging, dev
- Azure Database for PostgreSQL — Flexible Server with read replicas for reporting
- Azure Cache for Redis — session cache, rate limiting, real-time counters
- RabbitMQ — self-hosted on AKS (clustered, 3 nodes) for event streaming
- Azure Blob Storage — documents, labels, parcel photos
- Azure CDN — static assets and tracking widget for e-shops
Observability¶
The monitoring stack includes:
- Grafana dashboards — business metrics (parcels/min, delivery SLA), technical metrics (latency, error rate, resource utilisation)
- Prometheus — metrics collection from all microservices
- Loki — centralised logging
- Distributed tracing — end-to-end tracing of requests across services
- Alerting — PagerDuty integration with escalation policies
CI/CD Pipeline¶
Each microservice has its own CI/CD pipeline:
- Build & test — unit tests, integration tests, contract tests (Pact)
- Security scan — SAST, dependency vulnerability check, container image scan
- Staging deployment — automatic deployment to staging environment
- Canary release — gradual rollout to production with automatic rollback on error rate increase
- Post-deploy verification — smoke tests and synthetic monitoring
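The canary gate in the pipeline above boils down to a decision rule over error rates. A plausible sketch, with thresholds and the minimum-traffic guard chosen purely for illustration (the real gates would live in the pipeline configuration):

```python
def canary_decision(baseline_errors, baseline_total,
                    canary_errors, canary_total,
                    max_ratio=2.0, min_requests=100):
    """Decide whether to continue a canary rollout.

    Illustrative policy: roll back if the canary's error rate is both
    meaningfully high in absolute terms and more than `max_ratio` times
    the baseline's; wait if there is too little traffic to judge.
    """
    if canary_total < min_requests:
        return "wait"  # not enough canary traffic for a stable signal
    baseline_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / max(canary_total, 1)
    if canary_rate > baseline_rate * max_ratio and canary_rate > 0.01:
        return "rollback"
    return "promote"
```

Comparing against the live baseline rather than a fixed threshold keeps the gate meaningful during traffic spikes such as Black Friday, when absolute error counts rise everywhere.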
Data Migration¶
Migration from the legacy system was carried out in several phases using the dual-write pattern — data was written simultaneously to both the old and the new system, with reads gradually switched over to the new one. This approach enabled:
- Zero downtime during migration
- Instant rollback to the legacy system in case of issues
- Incremental verification of data consistency
- Migration completed in 3 months without the loss of a single parcel
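The dual-write pattern can be sketched as a thin repository wrapper; this is an illustration of the technique, not the client's actual migration code, and the plain-dict "databases" stand in for the legacy and new stores.

```python
class DualWriteRepository:
    """Write every record to both systems; read from whichever side a flag selects.

    A sketch of the dual-write migration pattern under assumed interfaces.
    """

    def __init__(self, legacy, modern, read_from_modern=False):
        self.legacy = legacy
        self.modern = modern
        self.read_from_modern = read_from_modern

    def save(self, parcel_id, record):
        self.legacy[parcel_id] = record       # legacy stays authoritative
        try:
            self.modern[parcel_id] = record   # best-effort mirror write
        except Exception:
            pass  # a real system would queue the failed write for replay

    def get(self, parcel_id):
        store = self.modern if self.read_from_modern else self.legacy
        return store.get(parcel_id)

legacy_db, new_db = {}, {}
repo = DualWriteRepository(legacy_db, new_db)
repo.save("CZ123", {"status": "delivered"})

# Cutover is a flag flip: reads switch to the new system,
# and flipping it back gives the instant rollback described above.
repo.read_from_modern = True
```

Because both stores receive every write throughout the migration, consistency can be verified incrementally by diffing the two sides before the read path is switched.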
Results¶
Measurable Benefits¶
After 12 months of development and gradual deployment, the system delivered significant improvements in key metrics:
- 3× faster parcel processing — from depot intake to dispatch, processing time was reduced from an average of 45 minutes to 15 minutes thanks to automation and workflow optimisation
- 40% reduction in error rate — automatic validation, deduplication, and the business rules engine eliminated the majority of manual entry and sorting errors
- Real-time visibility — partners and customers see the current parcel status with a latency below 1 second from the event
- 99.97% uptime — exceeding the SLA target of 99.95%, including seamless handling of Black Friday at 3× normal volume
- 50+ integrated carriers — the standardised integration interface reduced carrier onboarding from weeks to days
Operational Benefits¶
- Self-service for partners — e-shops configure delivery rules, monitor metrics, and resolve complaints themselves without needing to contact support
- Predictive maintenance of parcel lockers — automatic problem detection reduced the number of service interventions by 30%
- Scalability — the system is ready for 2× the current volume without architectural changes
Strategic Impact¶
The new logistics IS became the backbone of the client’s technological transformation. It enabled expansion into additional countries without needing to build separate systems, opened the path to new products (same-day delivery, cross-border), and gave the company a competitive advantage in the form of superior customer experience and operational efficiency.
Technology¶
The project uses a modern technology stack optimised for high throughput and availability. Backend microservices combine .NET (for transactional logic with high performance demands) and Python (for data pipelines, ML models for ETA predictions, and automation scripts). PostgreSQL serves as the primary database, with Redis for caching and RabbitMQ for asynchronous messaging. The entire infrastructure runs on Azure Kubernetes Service and is fully managed via Terraform. Monitoring is provided by Grafana, backed by Prometheus and Loki.