
Logistics IS & WMS

10,000 shipments per hour. Zero lost.

We build warehouse management systems and logistics IS that control sorting hubs, courier networks, and warehouse operations in real time.

10k+ shipments/h
Throughput
<50ms
API latency P95
99.95%
Availability
0
Lost shipments

Why logistics needs specialized IS

A generic ERP warehouse module isn’t enough when you’re processing thousands of shipments per hour. Logistics operations require real-time processing, hardware integration (scanners, sorters, scales), and fault tolerance. A one-minute outage at a sorting hub means hundreds of unprocessed shipments and a cascade of delays.

Architecture that endures

Our logistics IS are built on event-driven architecture with Apache Kafka as the backbone. Every operation — shipment scan, re-sorting, vehicle loading — generates an event. Downstream systems (tracking, billing, reporting, customer notifications) consume events independently.

Why event-driven:

  • Resilience — an outage of one downstream system doesn't stop sorting; events wait in Kafka.
  • Scalability — a new consumer (new reporting, new integration) can be added without changing the producer.
  • Auditability — the complete history of every shipment can be reconstructed from events.
  • Decoupling — teams work independently on their own services.
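The decoupling above can be sketched in a few lines. This is a minimal illustration, not production code: event fields and handler names are assumed for the example, and a plain in-memory list stands in for a Kafka topic.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ShipmentEvent:
    event_type: str      # e.g. "SCANNED", "SORTED", "LOADED"
    shipment_id: str
    location: str
    timestamp: float

# In production this is a Kafka topic; an append-only list stands in here.
event_log: list[str] = []

def publish(event: ShipmentEvent) -> None:
    event_log.append(json.dumps(asdict(event)))

def replay(consumer) -> list:
    """Each downstream system (tracking, billing, reporting)
    replays the same log independently, at its own pace."""
    return [consumer(json.loads(raw)) for raw in event_log]

publish(ShipmentEvent("SCANNED", "PKG-001", "hub-prague", 1700000000.0))
publish(ShipmentEvent("SORTED", "PKG-001", "line-4", 1700000005.0))

# Tracking consumes the stream without coordinating with any other consumer.
tracking = replay(lambda e: (e["shipment_id"], e["event_type"]))
```

Because consumers only read the log, adding a new one never touches the producer side — the property the bullet list describes.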

Typical WMS architecture

┌──────────────────────────────────────────────────────────┐
│  DEVICES (Edge)                                           │
│  Scanners, sorting lines, scales, printers              │
│  → Edge processing, offline buffer                       │
└──────────────┬───────────────────────────────────────────┘
               │ MQTT / REST
               ▼
┌──────────────────────────────────────────────────────────┐
│  INGESTION LAYER                                          │
│  API Gateway → Event Validation → Kafka                  │
└──────────────┬───────────────────────────────────────────┘
               │
               ▼
┌──────────────────────────────────────────────────────────┐
│  PROCESSING LAYER                                         │
│  Sorting Engine │ Routing Engine │ Inventory Manager     │
│  Rules Engine   │ Exception Handler │ SLA Monitor        │
└──────────────┬───────────────────────────────────────────┘
               │
               ▼
┌──────────────────────────────────────────────────────────┐
│  INTEGRATION LAYER                                        │
│  ERP Sync │ Customer Notifications │ Partner APIs        │
│  Billing  │ Reporting │ Analytics                        │
└──────────────────────────────────────────────────────────┘

Sorting Engine — heart of the sorting hub

The sorting engine decides where a shipment goes — based on destination, weight, dimensions, priority, and SLA. The decision must be made in under 10ms because the shipment physically passes through the sorting line.

How it works:

  1. Scanner reads barcode/QR → event to Kafka
  2. Sorting engine queries routing rules (Redis cache, <1ms)
  3. Decision → command to the sorting line
  4. Confirmation → event recording the successful sort

Rules change without deployment: an operator modifies routing in the admin UI and the change propagates to the cache within seconds. New rules can be A/B tested on a portion of traffic.
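The decision step can be sketched as a pure function over cached rules. A dict stands in for the Redis cache here, and the rule schema (zone, chute, weight limit) is illustrative, not the production format:

```python
# Routing rules live in a cache (Redis in production); a dict stands in here.
ROUTING_RULES = {
    "CZ-1": {"chute": 4, "max_weight_kg": 30},
    "DE-2": {"chute": 7, "max_weight_kg": 30},
}

EXCEPTION_CHUTE = 99  # unknown destination or oversize goes to manual handling

def sorting_decision(destination_zone: str, weight_kg: float) -> int:
    """Return the chute number for a scanned shipment.
    Must stay cheap: one cache lookup, no I/O, no branching on history."""
    rule = ROUTING_RULES.get(destination_zone)
    if rule is None or weight_kg > rule["max_weight_kg"]:
        return EXCEPTION_CHUTE
    return rule["chute"]
```

Keeping the decision a pure lookup is what makes the sub-10ms budget realistic: swapping the rules dict (cache refresh) changes behavior without redeploying the engine.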

Hardware integration

We integrate with everything you’ll find in a logistics depot:

  • Scanners — Zebra, Honeywell, Datalogic. REST API or SDK.
  • Sorting lines — OPC-UA, Modbus, proprietary protocols.
  • Scales — Industrial scales with automatic reading.
  • Label printers — ZPL, EPL, PDF direct print.
  • AMR robots — REST API for task assignment, fleet coordination.
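ZPL, mentioned above, is a plain-text command language, so label generation is just string building. A minimal label with a text field and a Code 128 barcode might look like this — the field positions and shipment fields are illustrative:

```python
def shipment_label_zpl(shipment_id: str, destination: str) -> str:
    """Build a minimal ZPL label:
    ^XA starts the format, ^FO positions a field, ^A0N sets the font,
    ^BCN draws a Code 128 barcode, ^FS ends a field, ^XZ ends the label."""
    return (
        "^XA"
        f"^FO50,50^A0N,40,40^FD{destination}^FS"       # destination as text
        f"^FO50,120^BCN,100,Y,N,N^FD{shipment_id}^FS"  # shipment ID as barcode
        "^XZ"
    )

label = shipment_label_zpl("PKG-001", "PRAGUE-HUB")
```

The resulting string is sent to the printer as-is (raw TCP on port 9100 is the common transport), which is why "PDF direct print" and templating stay simple on the software side.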

Real-time tracking

The customer sees where their shipment is in real time, not with an hour's delay. Event-driven architecture enables this natively: every scan generates an event, the tracking service consumes it and updates the status. WebSocket push delivers live tracking to web and app.
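Deriving the customer-facing status is then a fold over the shipment's event history. The event names and status texts below are assumptions for the sketch, not the production vocabulary:

```python
# Latest scan wins: the newest known event defines the displayed status.
STATUS_BY_EVENT = {
    "SCANNED": "At sorting hub",
    "SORTED": "Routed to destination",
    "LOADED": "Out for delivery",
    "DELIVERED": "Delivered",
}

def current_status(events: list[dict]) -> str:
    """Fold a shipment's event history into the status shown to the customer.
    Events may arrive out of order, so sort by timestamp first."""
    status = "Registered"
    for event in sorted(events, key=lambda e: e["timestamp"]):
        status = STATUS_BY_EVENT.get(event["event_type"], status)
    return status
```

Because the status is derived from events rather than stored, tracking never needs to write back into the sorting pipeline — it is a pure consumer.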

Scaling for peaks

Christmas, Black Friday, e-commerce campaigns — throughput grows 3-5×. Our architecture is ready for this:

  • Kafka partitioning — parallel processing, horizontal consumer scaling
  • Kubernetes HPA — automatic scale-up based on queue depth
  • Load testing — k6 simulates peaks months in advance
  • Capacity planning — we know how much we need before we need it
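A back-of-the-envelope version of that capacity planning, using illustrative numbers (the per-consumer throughput below is an assumption, not a measured figure):

```python
import math

def consumers_needed(peak_shipments_per_hour: int, per_consumer_per_hour: int) -> int:
    """Consumer instances needed to absorb a peak.
    Kafka caps active consumers at the partition count, so the topic
    must be provisioned with at least this many partitions up front."""
    return math.ceil(peak_shipments_per_hour / per_consumer_per_hour)

# Baseline 10,000/h, Black Friday multiplies load by 5,
# and we assume one consumer handles ~4,000 shipments/h.
baseline, peak_factor, per_consumer = 10_000, 5, 4_000
required = consumers_needed(baseline * peak_factor, per_consumer)
```

The partition caveat is the reason capacity planning happens "before we need it": scaling consumers is a minutes-long HPA action, but repartitioning a live topic is not.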

Monitoring and alerting

  • Business metrics — shipments/hour, sorting error rate, SLA compliance
  • Technical metrics — Kafka lag, API latency, error rate
  • Hardware monitoring — scanner status, sorters, connectivity
  • Alerting — PagerDuty/OpsGenie, escalation rules, runbooks

Dashboards in Grafana — one for management (business KPIs), one for operations (technical metrics), one for support (troubleshooting a specific shipment).

Frequently asked questions

Which warehouse processes does the system cover?

Receiving, warehousing, picking & packing, sorting, shipping, cross-docking, returns management. From a single warehouse to multi-depot networks with centralized control.

What happens when connectivity fails?

Edge processing on local devices. Scanners and sorting logic work offline, and data synchronizes once connectivity is restored. Kafka guarantees no events are lost.
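The offline buffer described here can be sketched as a small store-and-forward queue. Class and field names are illustrative; a list stands in for the acknowledged Kafka stream:

```python
from collections import deque

class OfflineBuffer:
    """Buffer scan events locally while the uplink is down; flush on reconnect."""

    def __init__(self) -> None:
        self.pending: deque = deque()
        self.online = True
        self.delivered: list = []  # stands in for events acknowledged upstream

    def record(self, event: dict) -> None:
        if self.online:
            self.delivered.append(event)
        else:
            self.pending.append(event)  # keep scanning, remember locally

    def reconnect(self) -> None:
        self.online = True
        while self.pending:  # drain in original scan order
            self.delivered.append(self.pending.popleft())

buf = OfflineBuffer()
buf.online = False
buf.record({"shipment_id": "PKG-001", "event_type": "SCANNED"})
buf.reconnect()
```

Draining in scan order preserves the event history that tracking and auditing rebuild their state from.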

Can you integrate with our ERP?

Yes. REST API, message queue, CDC — depending on what your ERP supports. SAP, Oracle, Microsoft Dynamics, custom systems. A typical integration takes 2-4 weeks.

How do you handle seasonal peaks?

Horizontal autoscaling on Kubernetes. Load tests simulate Christmas peaks months in advance. We add capacity in minutes, not days. You pay only for what you use.


Have a project?

Let's talk about it.

Schedule a meeting