
Core Information Systems

Systems that run your operations.

Information systems for logistics, finance and retail. High-performance backend, transactional processing, HA/DR and zero-downtime deployments.

Logistics IS & WMS

Sorting hubs, shipment management, courier systems. Event-driven architecture with Apache Kafka handles peaks of 10,000+ shipments/hour without performance degradation.

Logistics is real-time business. A shipment lost in the system for 5 minutes causes a cascade of problems — wrong sorting, delivery delays, SLA penalties. We build WMS and logistics IS where every shipment has a clear state and clear path at every moment.

Event-driven architecture is the foundation. Every scan, every re-sort, every loading generates an event to Apache Kafka. Downstream systems (tracking, billing, reporting) consume events independently. Failure of one system doesn’t stop sorting — events wait in Kafka and get processed after recovery.
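The decoupling described above can be sketched with an in-memory stand-in for a Kafka topic (a real deployment uses a Kafka client; the `Topic` class here is purely illustrative): each consumer tracks its own offset, so a consumer that was down simply catches up when it recovers.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Topic:
    """In-memory stand-in for a Kafka topic: an append-only event log.
    Each consumer keeps its own offset, so a slow or crashed consumer
    resumes from where it left off and no event is lost."""
    events: list = field(default_factory=list)
    offsets: dict = field(default_factory=lambda: defaultdict(int))

    def publish(self, event: dict) -> None:
        self.events.append(event)

    def poll(self, consumer: str) -> list:
        """Return all events this consumer has not seen yet, advance its offset."""
        start = self.offsets[consumer]
        batch = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return batch

topic = Topic()
topic.publish({"type": "scan", "shipment": "A1"})
topic.publish({"type": "sort", "shipment": "A1", "chute": 7})

# Tracking consumes immediately; billing was "down" and catches up later.
print(len(topic.poll("tracking")))  # prints 2
topic.publish({"type": "load", "shipment": "A1"})
print(len(topic.poll("billing")))   # prints 3; nothing lost while it was down
```

The key property is the same as in Kafka: the producer never waits for consumers, and a consumer outage only delays its own processing.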

Throughput and latency: Our WMS systems process 10,000+ shipments/hour in a single hub. API latency under 50ms at P95. Horizontal scaling via Kubernetes — adding capacity in minutes, not days. Load testing with k6 simulates Christmas peaks months ahead.

Hardware integration: Sorting lines, scanning gates, scales, label printers. We communicate via OPC-UA, REST API and proprietary protocols. Edge processing on devices ensures functionality even during network outages — data synchronizes after connectivity is restored.

Real-world example: For a logistics company with 15 hubs, we replaced a monolithic system with event-driven architecture. Result: 3× higher throughput, MTTR from hours to minutes, zero shipment losses in the system. ROI in 8 months.

Tags: wms, event-driven, kafka

Transactional Systems

Payment systems, clearing processes, accounting cores. ACID guarantees, HA/DR with automatic failover and complete audit trail for regulators.

Transactional systems have no room for “almost working”. When a payment goes halfway through, you have a problem. We build systems with ACID guarantees, where every transaction is either complete or didn’t happen at all. No inconsistent states, no lost money.

Architecture for reliability: Saga pattern for distributed transactions across multiple services. Outbox pattern for guaranteed event delivery. Idempotent API — repeated request doesn’t cause duplicate transaction. Dead letter queue for failed messages with automatic retry and alerting.
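As a minimal illustration of the idempotent-API part, here is a sketch (the `PaymentService` class and its fields are hypothetical, not the production implementation): a client-supplied idempotency key makes retries safe, because a replay returns the original result instead of committing a second transaction.

```python
class PaymentService:
    """Idempotent handler sketch: replays of the same idempotency key
    return the stored result instead of creating a duplicate transaction."""
    def __init__(self):
        self.processed = {}   # idempotency_key -> result of first attempt
        self.ledger = []      # committed transactions

    def charge(self, idempotency_key: str, account: str, amount: int) -> dict:
        if idempotency_key in self.processed:       # replayed request
            return self.processed[idempotency_key]
        tx = {"id": len(self.ledger) + 1, "account": account, "amount": amount}
        self.ledger.append(tx)                      # commit exactly once
        result = {"status": "ok", "tx_id": tx["id"]}
        self.processed[idempotency_key] = result
        return result

svc = PaymentService()
first = svc.charge("key-123", "acc-1", 500)
retry = svc.charge("key-123", "acc-1", 500)     # network retry, same key
assert first == retry and len(svc.ledger) == 1  # no duplicate charge
```

In production the `processed` map lives in the same database transaction as the ledger write, so the dedup check and the commit are atomic.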

HA/DR design: Active-passive with automatic failover (RTO < 30s). Synchronous replication for zero data loss (RPO = 0). Regular DR tests — not once a year on paper, but automated monthly failover drills. Geo-redundancy for critical systems.

Audit and compliance: Every transaction logged with full context — who, what, when, from where. Immutable audit log (append-only). The regulator gets a complete trail in minutes, not days. Compliant with CNB, PSD2 and DORA requirements.

Performance under load: We process thousands of transactions/second with consistent latency. Connection pooling, prepared statements, optimized indexes. Load testing simulates Black Friday at twice the expected volume — we know the limit before we hit it.

Tags: transactions, acid, ha-dr

Loyalty & Identity Systems

Loyalty engine, points system, partner integration. Millions of customers, real-time points and rewards processing with individual-level personalization.

A loyalty program is a core system, not a marketing gadget. When a customer pays and points don’t appear within 5 seconds, they call customer service. When a partner doesn’t have current data, they refuse the discount. We build loyalty platforms that handle millions of customers in real time.

Real-time processing: Points are credited at the moment of transaction, not in a nightly batch. Event-driven architecture — POS system sends event, loyalty engine processes rules (multipliers, bonus actions, tier upgrades), customer sees updated status immediately. Latency < 200ms end-to-end.

Rules engine: Flexible rules without deployment — marketing team sets up campaign (2× points on weekend, bonus for purchase over 1000 CZK, partner multiplier), system applies it immediately. Rule versioning, A/B testing campaigns, real-time performance reporting.
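The rules-as-data idea can be sketched as follows (rule names, fields and rates are illustrative, not a real product schema): marketing edits the rule list, the evaluation code never changes.

```python
from datetime import datetime

# Each rule is plain data, so campaigns change without a deploy.
# Names and fields here are illustrative.
RULES = [
    {"name": "weekend-2x", "multiplier": 2.0,
     "applies": lambda tx: datetime.fromisoformat(tx["ts"]).weekday() >= 5},
    {"name": "big-basket-bonus", "bonus": 100,
     "applies": lambda tx: tx["amount"] > 1000},
]

def points_for(tx: dict, base_rate: float = 0.1) -> int:
    """Apply every matching rule to the base points for a transaction."""
    points = tx["amount"] * base_rate
    for rule in RULES:
        if rule["applies"](tx):
            points *= rule.get("multiplier", 1.0)
            points += rule.get("bonus", 0)
    return int(points)

# Saturday purchase of 1200 CZK: base 120 points, doubled, plus 100 bonus
print(points_for({"amount": 1200, "ts": "2025-01-04T10:00:00"}))  # prints 340
```

A production engine adds rule versioning and validity windows on top of the same shape.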

Partner integration: REST API for partners with OAuth2, rate limiting, webhook notifications. Partner queries point balance, applies discount, reports transaction. SLA per partner, monitoring per endpoint. Typically 10-50 partners, each with their own rules.
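Per-partner rate limiting is commonly done with a token bucket, one bucket per partner, so a noisy partner cannot exhaust capacity for the others. A sketch (rates and burst sizes are illustrative):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch: sustained `rate` requests per
    second, with bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                  # tokens refilled per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)      # 10 req/s, bursts of 5
results = [bucket.allow() for _ in range(7)]   # 7 back-to-back requests
print(results.count(True))                     # only the burst capacity passes
```

Rejected requests get HTTP 429 with a Retry-After header; the per-partner SLA mentioned above maps directly to the bucket parameters.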

Scaling: Millions of customers, tens of millions of transactions monthly. Redis for hot data (current points, tier), PostgreSQL for history. Horizontal scaling — adding new partner doesn’t slow down existing ones. GDPR compliance — right to erasure, data portability, consent management.

Tags: loyalty, identity, real-time

Legacy Modernization

Strangler Fig pattern, gradual migration without big-bang rewrite. Every step brings value, no step breaks production. Zero downtime throughout.

Big-bang rewrite is gambling, not strategy. 70% of large rewrite projects fail or exceed budget by 2×+. We modernize gradually — strangler fig pattern, where new system gradually takes over functions from the old one. Every step is deployable, testable and rollbackable.

Our 7-step playbook: (1) Stabilization — monitoring, metrics, baseline. (2) Domain mapping — event storming, bounded contexts, dependency analysis. (3) API gateway — facade in front of legacy, routing rules. (4) First module isolation — smallest bounded context with fewest dependencies. (5) Data migration — CDC (Change Data Capture), dual-write, reconciliation. (6) Gradual rollout — canary releases, traffic shifting 5% → 25% → 50% → 100%. (7) Decommission — old module off, monitoring new one.
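Step (3) is the heart of the strangler fig: the gateway owns the decision of which backend handles each request. A sketch of the routing rule (paths and service names are illustrative):

```python
# Strangler-fig routing sketch: migrated routes go to the new services,
# everything else falls through to the legacy system. Growing this list
# is how the new system gradually "strangles" the old one.
MIGRATED_PREFIXES = ["/shipments", "/tracking"]   # illustrative route table

def route(path: str) -> str:
    """Decide which backend serves a request path."""
    for prefix in MIGRATED_PREFIXES:
        if path.startswith(prefix):
            return "new-service"
    return "legacy"

print(route("/shipments/123"))   # prints new-service
print(route("/billing/42"))      # prints legacy
```

In practice this lives in the API gateway configuration rather than application code, but the shape of the decision is the same.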

Data migration is the hardest part. Legacy systems have years of accumulated technical debt in data — inconsistent formats, missing validation, duplicates. We use CDC (Debezium) for real-time replication, reconciliation jobs for consistency verification, and rollback plan for every step.
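A reconciliation job boils down to one anti-join between the legacy and new tables. A sketch run against SQLite for self-containment (table and column names are illustrative):

```python
import sqlite3

# Reconciliation sketch: during dual-write, a periodic job compares the
# legacy and new tables and reports divergent or missing records.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE legacy_orders (id INTEGER PRIMARY KEY, amount INTEGER);
    CREATE TABLE new_orders    (id INTEGER PRIMARY KEY, amount INTEGER);
    INSERT INTO legacy_orders VALUES (1, 100), (2, 250), (3, 300);
    INSERT INTO new_orders    VALUES (1, 100), (2, 999);  -- 2 diverged, 3 missing
""")

def reconcile(conn) -> list:
    """Return ids that are missing or differ between the two tables."""
    rows = conn.execute("""
        SELECT l.id
        FROM legacy_orders l
        LEFT JOIN new_orders n ON n.id = l.id
        WHERE n.id IS NULL OR n.amount <> l.amount
        ORDER BY l.id
    """).fetchall()
    return [r[0] for r in rows]

print(reconcile(db))   # prints [2, 3]: mismatched amount and missing row
```

A non-empty result blocks the cutover; the job runs continuously during the dual-write window, not once at the end.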

Risk management: Every modernization step has defined success metrics, rollback plan and communication protocol. Feature flags separate deploy from release — code is in production, but feature is off until it passes validation. We never change more than one module at a time.

Real results: Typical modernization takes 6-18 months (depends on size). Brings value from first quarter — better monitoring, faster deployments, reduced operational costs. Total TCO typically drops by 30-50% due to eliminated legacy maintenance.

Tags: modernization, strangler-fig, migration

DDD & Microservices

Domain-driven design, bounded contexts, event sourcing and CQRS. Architecture that scales technically and organizationally — teams work independently on their domains.

Microservices without DDD are a distributed monolith. Splitting services by technical layers (auth service, notification service) instead of by domains leads to tight coupling, distributed transactions and deployment hell. DDD defines service boundaries by business domains — each bounded context has a clear responsibility.

Event Storming as discovery tool: Before we write a line of code, we do Event Storming with domain experts. 2-3 days of workshops where we map business processes, identify aggregates, bounded contexts and domain events. Output: system map understood by both business and tech.

Event Sourcing & CQRS: For domains where you need complete history (finance, audit, compliance), we use Event Sourcing — state is calculated from event history, not from last snapshot. CQRS separates reading from writing — read model optimized for queries, write model for business logic.
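A minimal event-sourcing sketch (event types and amounts are illustrative): the balance is a fold over the event history, so any past state can be recomputed and the history itself is the audit trail.

```python
# Event-sourcing sketch: state is never stored directly; it is recomputed
# by folding over the full event history.
def apply(state: int, event: dict) -> int:
    """Apply a single domain event to the current state."""
    if event["type"] == "deposited":
        return state + event["amount"]
    if event["type"] == "withdrawn":
        return state - event["amount"]
    return state  # unknown events are ignored, which keeps replays safe

def current_balance(events: list) -> int:
    balance = 0
    for event in events:
        balance = apply(balance, event)
    return balance

history = [
    {"type": "deposited", "amount": 1000},
    {"type": "withdrawn", "amount": 300},
    {"type": "deposited", "amount": 50},
]
print(current_balance(history))       # prints 750
print(current_balance(history[:2]))   # state at any past point: prints 700
```

CQRS then builds read models by consuming the same events, so the query side can be denormalized without touching the write side.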

Organizational scaling: Conway’s Law says architecture copies organization. With DDD we flip it — bounded contexts define optimal team structure. Each team owns their context, their deployment pipeline, their SLO. Team Topologies in practice.

Anti-patterns we avoid: Shared database between services (tight coupling). Synchronous call chains (cascade failures). Too small services (nano-services overhead). God service that knows everything. Distributed transactions across 5+ services.

Tags: ddd, microservices, cqrs

Zero-downtime Deployments

Blue-green, canary releases, feature flags, progressive delivery. Deployment is a non-event — automated, tested, rollbackable. Your team deploys daily without stress.

Deployment shouldn’t be an event. If your team has a “deployment window” on Sunday night and everyone is on standby, something is wrong. We build deployment pipelines where push to main is routine — automated, tested, with instant rollback.

Blue-green deployment: Two identical environments. New version is deployed to “green”, tested, traffic is switched. If something fails, we switch back to “blue” in seconds. No downtime, no stress. For Kubernetes we use Argo Rollouts with automatic health checks.

Canary releases: New version gets 5% traffic. Automatic metrics (error rate, latency, business KPI) decide whether to continue to 25%, 50%, 100% — or rollback. Entire process automated, human intervention only on escalation. Typically 30-60 minutes from push to full rollout.
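The promotion decision reduces to a pure function over canary metrics. A sketch (thresholds and steps are illustrative; tools like Argo Rollouts express the same logic as analysis templates):

```python
# Canary promotion sketch: metrics from the canary decide whether traffic
# moves to the next step or rolls back. Thresholds are illustrative.
STEPS = [5, 25, 50, 100]                  # percent of traffic on the canary

def next_step(current: int, error_rate: float, p95_ms: float) -> int:
    """Return the next traffic percentage, or 0 to signal rollback."""
    if error_rate > 0.01 or p95_ms > 50:  # SLO breach: roll back
        return 0
    idx = STEPS.index(current)
    return STEPS[min(idx + 1, len(STEPS) - 1)]

print(next_step(5, error_rate=0.002, p95_ms=38))   # healthy: prints 25
print(next_step(25, error_rate=0.03, p95_ms=41))   # errors: prints 0 (rollback)
```

Because the decision is automated, the 30-60 minute rollout needs no human in the loop unless a step returns 0.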

Feature flags: Separation of deploy from release. Code is in production, but feature is off. We turn on gradually — internal users → beta → 10% → 100%. A/B testing built-in. Kill switch on every feature. LaunchDarkly, Unleash, or custom solution based on needs.
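Percentage rollout is typically keyed on a stable hash of the user id, so the same user gets the same answer on every request and stays enabled as the percentage grows. A sketch (flag and user names are illustrative):

```python
import hashlib

def is_enabled(flag: str, user_id: str, rollout_percent: int) -> bool:
    """Stable percentage rollout: hash of (flag, user) picks a bucket 0-99."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

user = "customer-42"
at_10 = is_enabled("new-checkout", user, 10)
at_50 = is_enabled("new-checkout", user, 50)
# A user enabled at 10% stays enabled at 50%: buckets only grow.
assert not (at_10 and not at_50)

print(is_enabled("new-checkout", user, 0))     # kill switch: prints False
print(is_enabled("new-checkout", user, 100))   # full rollout: prints True
```

Hashing the flag name together with the user id means different flags roll out to different user subsets, which also gives you A/B cohorts for free.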

Database migrations: Riskiest part of deployment. We use expand-contract pattern — first add new column (expand), then migrate data, then remove old (contract). No breaking changes, no table locks. Flyway/Liquibase with automatic rollback scripts.
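The expand-contract sequence can be sketched end to end against SQLite (column names are illustrative; note that SQLite supports `DROP COLUMN` only since version 3.35):

```python
import sqlite3

# Expand-contract sketch: add the new columns alongside the old one
# (expand), backfill while both app versions run, and only then drop
# the old column (contract). No step breaks a running reader or writer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, fullname TEXT)")
db.execute("INSERT INTO customers VALUES (1, 'Jana Novakova')")

# Step 1, expand: old code keeps writing `fullname`, unaware of new columns.
db.execute("ALTER TABLE customers ADD COLUMN first_name TEXT")
db.execute("ALTER TABLE customers ADD COLUMN last_name TEXT")

# Step 2, migrate: backfill (in batches in production; one statement here).
db.execute("""
    UPDATE customers SET
        first_name = substr(fullname, 1, instr(fullname, ' ') - 1),
        last_name  = substr(fullname, instr(fullname, ' ') + 1)
    WHERE first_name IS NULL
""")

# Step 3, contract: once no reader uses it, drop the old column.
db.execute("ALTER TABLE customers DROP COLUMN fullname")

print(db.execute("SELECT first_name, last_name FROM customers").fetchone())
```

Each step is a separate release, which is exactly what lets Flyway/Liquibase version them independently with their own rollback scripts.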

Tags: deployment, ci-cd, feature-flags
Mission-critical system

A system whose outage directly stops operations and generates financial losses. Unlike internal tools, where an outage merely hurts, here every minute of downtime costs money.

Real-world example: A sorting system in a logistics hub processes 10,000 shipments/hour. A 30-minute outage = 5,000 unprocessed shipments, delivery delays, SLA penalties.
  • Redundancy and automatic failover
  • Monitoring with alerting (< 2 min detection)
  • Runbooks for incident response
  • Zero-downtime deployment pipeline
  • 99.95% — Availability (SLA)
  • <50ms — P95 API latency
  • 10k+ req/s — Throughput
  • <15 min — MTTR

How We Do It

  1. Process Analysis — We map key business processes and identify bottlenecks in existing systems.
  2. Architecture Design — We define the target state: modules, integrations, data model and migration strategy.
  3. Iterative Development — We deliver in sprints with continuous testing and user feedback.
  4. Migration & Go-live — Controlled transition from the old system, including data migration and user training.
  5. Operations & Development — SLA-based support, monitoring and continuous development according to changing needs.

When You Need Core IS

Core IS pays off where the system directly controls operations and its outage costs real money.

Typical Situations

  1. Outage = revenue loss — System controls real-time operations: sorting lines, payment transactions, customer orders.
  2. Legacy can’t keep up with growth — The current system was designed for 100 operations/s; now it faces 10,000.
  3. Integration complexity — Dozens of systems, no governance, every change is a risk.
  4. Regulation and audit — Finance, healthcare, logistics — you need audit trail and compliance.

What We Deliver

Logistics IS & WMS

Receiving, warehousing, picking, shipping. Event-driven architecture with Apache Kafka. Our WMS systems control sorting hubs processing 10,000+ shipments/hour. We integrate with scanners, sorting lines, scales and printers. Edge processing ensures functionality even during connectivity outages.

Transactional Systems

Payment gateways, clearing, accounting cores. ACID guarantees, Saga pattern for distributed transactions, Outbox pattern for event delivery. Automatic failover with RTO < 30s and RPO = 0 for critical systems.

Loyalty Platforms

Loyalty engine, points systems, partner integrations. Real-time points processing at transaction (not nightly batch). Flexible rules engine for marketing campaigns without deployment. Millions of customers, dozens of partners.

Legacy Modernization

7-step playbook: stabilization → domain mapping → API gateway → module isolation → data migration → gradual rollout → operations. No big-bang rewrite. Every step brings measurable value and is rollbackable.

Modernization Playbook

We don’t go the big-bang rewrite route. We modernize gradually:

  1. Stabilization and measurement — Monitoring, metrics, baseline. Before you start changing, you need to know where you are.
  2. Domain mapping — Event Storming with domain experts. Bounded contexts, aggregates, domain events. 2-3 days of workshops that save months of wrong decisions.
  3. API gateway — A facade in front of the legacy system. Both new services and legacy sit behind the gateway; routing rules decide which one handles each request.
  4. First module isolation — Smallest bounded context with fewest dependencies. Strangler fig pattern in practice.
  5. Data migration — CDC (Debezium), dual-write, reconciliation. Riskiest step — therefore most thoroughly tested.
  6. Gradual rollout — Canary releases, traffic shifting 5% → 25% → 50% → 100%. Metrics decide on continuation.
  7. Operations — SRE processes, runbooks, on-call rotation, post-mortem culture.

Decision Matrix: Modernize vs. Rebuild

  • Risk: Modernize — low, gradual steps; Rebuild — high, big bang
  • Time to value: Modernize — months; Rebuild — years
  • Operational continuity: Modernize — preserved; Rebuild — dual-run necessary
  • Costs: Modernize — gradual, measurable; Rebuild — high upfront
  • When to choose: Modernize — most cases; Rebuild — only when legacy is unmaintainable

Architectural Principles

Domain-Driven Design — Bounded contexts define service boundaries. Event Storming as discovery tool. Ubiquitous language between business and tech team.

Event-Driven Architecture — Asynchronous communication via Apache Kafka. Event sourcing for domains requiring complete audit trail. CQRS for separating reads and writes.

Resilience Patterns — Circuit breakers (Polly, Resilience4j), retry with exponential backoff, bulkhead isolation, graceful degradation. System works even during partial outage.
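Of these patterns, the circuit breaker is the least intuitive, so here is a minimal library-free sketch (thresholds are illustrative; Polly and Resilience4j provide the production-grade version): after a run of consecutive failures the circuit opens and calls fail fast instead of piling load on a struggling downstream.

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: after `threshold` consecutive
    failures the circuit opens and calls fail fast; after `reset_after`
    seconds one trial call is allowed through (half-open state)."""
    def __init__(self, threshold: int = 3, reset_after: float = 30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open, failing fast")
            self.opened_at = None          # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                  # success closes the circuit
        return result

breaker = CircuitBreaker(threshold=2, reset_after=30)

def flaky():
    raise ConnectionError("downstream down")

for _ in range(2):                         # two failures trip the breaker
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:
    breaker.call(lambda: "ok")             # fails fast, downstream untouched
except RuntimeError as e:
    print(e)                               # prints: circuit open, failing fast
```

The fail-fast behaviour is what enables graceful degradation: the caller gets an immediate error it can handle (cached data, reduced feature) instead of a hanging request.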

Observability from Day 1 — Structured logging, distributed tracing (OpenTelemetry), business metrics. Alerting on SLO/SLI, not technical metrics. Runbooks for top 10 incidents.

Technology Stack

  • Backend: C#/.NET 8, Python, Go
  • Database: PostgreSQL, SQL Server, Redis, MongoDB
  • Messaging: Apache Kafka, RabbitMQ
  • Orchestration: Kubernetes (AKS/EKS), Docker
  • CI/CD: GitHub Actions, GitLab CI, ArgoCD
  • Monitoring: Grafana, Prometheus, Jaeger, ELK
  • Cloud: Azure, AWS (multi-cloud ready)

Frequently Asked Questions

Do you build greenfield, or can you work with our existing systems?
Most projects we build on existing foundations. Our modernization playbook is designed for gradual migration — strangler fig pattern, without a big-bang rewrite.

How do you keep releases from breaking production?
Blue-green and canary deployments, automated tests, feature flags. Every release goes through a staging environment with production-like data.

What technologies do you use?
C#/.NET, Python, PostgreSQL, SQL Server, Redis, RabbitMQ/Kafka, Docker, Kubernetes, Azure, AWS. Architectural patterns: DDD, event-driven, CQRS, microservices.

How long does a project take, and what does it cost?
Depends on scope. Typically: discovery (2-4 weeks) → MVP (3-6 months) → production. Price from 2M CZK for a medium-complexity system.

How do you ensure high availability?
Multi-layer approach: redundant infrastructure (multi-AZ), automatic failover (RTO < 30s), health checks, circuit breakers, graceful degradation. Regular chaos engineering tests verify that failover actually works.

Can the system run in multiple countries?
Yes. Multi-tenant architecture with per-tenant configuration, multi-currency, timezone handling and localization. Shared codebase, isolated data. Typically for retail and logistics operating in the CEE region.

Have a project in mind?

Let's talk about it.

Schedule a meeting