
200M Transaction Migration for a Financial Institution

Leading Czech bank

200M+
Records migrated
0
Service downtime
60%
Cost reduction
3x
Query speedup

The client is one of the largest financial institutions in the Czech Republic. Decades of running on Oracle databases meant rising licence costs, scalability limitations, and increasingly complex infrastructure maintenance. IT leadership decided on a strategic migration of the transaction store to Azure Cosmos DB — a fully managed NoSQL database designed for global distribution and elastic performance.

Our task was to design and execute a complete migration strategy for 200 million historical transaction records, with absolutely zero downtime for end users.

Challenge

Data volume and complexity

The transaction database contained 200 million records accumulated over more than 15 years. The data included:

  • Payment transactions — card payments, transfers, direct debits with complex relationships to accounts, clients, and merchants
  • Historical records — regulatory requirement to retain complete transaction history for 10 years
  • Reference data — interconnections with dozens of other systems (CRM, AML, reporting)
  • Indexes and views — hundreds of SQL views and stored procedures created over years of operation

A direct migration was not feasible. The Oracle data model (relational, normalized) fundamentally differed from the Cosmos DB target model (document-based, denormalized). Every record had to be transformed, enriched, and validated.
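To illustrate the kind of transformation involved, the sketch below denormalizes a relational transaction row (joined with its account and merchant rows) into a single document. All field names and the choice of `accountId` as partition key are illustrative assumptions, not the bank's actual schema:

```python
def to_document(txn_row, account_row, merchant_row):
    """Collapse a joined relational row into one Cosmos DB-style document.
    Field names are hypothetical, chosen for illustration only."""
    return {
        "id": str(txn_row["txn_id"]),
        # Partition key chosen for the dominant access pattern:
        # "all transactions for an account".
        "accountId": account_row["account_id"],
        "amount": txn_row["amount"],
        "currency": txn_row["currency"],
        "type": txn_row["type"],
        "bookedAt": txn_row["booked_at"],
        # Embedded (denormalized) reference data replaces SQL joins.
        "account": {"iban": account_row["iban"], "holder": account_row["holder"]},
        "merchant": {"id": merchant_row["merchant_id"], "name": merchant_row["name"]},
    }

doc = to_document(
    {"txn_id": 1, "amount": "250.00", "currency": "CZK",
     "type": "card_payment", "booked_at": "2024-01-15T10:30:00Z"},
    {"account_id": "A-42", "iban": "CZ6508000000192000145399", "holder": "J. Novak"},
    {"merchant_id": "M-7", "name": "Grocery s.r.o."},
)
```

Embedding reference data trades storage for read speed: the document answers the common queries without joins, at the cost of updating embedded copies when reference data changes.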

Zero downtime

The financial system processes transactions 24/7. Any outage would have a direct financial impact on millions of clients. The requirement was unambiguous: the migration must proceed without any service interruption, without performance degradation, and without data loss.

Data consistency

In a banking environment, every cent must balance. Any discrepancy between source and target systems would be unacceptable. We needed a mechanism to ensure 100% data consistency during and after the migration.

Solution

Custom CDC pipeline

We designed a custom Change Data Capture (CDC) pipeline built on Apache Kafka that captured all changes in the Oracle database in real time and replicated them to Cosmos DB:

  1. Oracle LogMiner integration — reading redo logs to capture every change without impacting production workload
  2. Kafka Connect — reliable change event transport with guaranteed delivery (exactly-once semantics)
  3. Transformation layer — Python microservices transforming the relational model into the Cosmos DB document model
  4. Cosmos DB writer — optimized bulk writer with retry logic and backpressure management

The pipeline processed an average of 50,000 changes per minute with end-to-end latency under 500ms.
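The consume-transform-write loop at the heart of such a pipeline can be sketched as follows. This is a simplified stand-in, not the production service: a list plays the role of a Kafka topic partition and a dict plays the role of the Cosmos DB container, with retry logic shown in miniature:

```python
import time

def run_cdc_batch(change_events, target_store, max_retries=3):
    """Minimal sketch of a CDC consume-transform-write loop.
    change_events stands in for a Kafka partition; target_store
    for the target container (here just a dict keyed by id)."""
    written = 0
    for event in change_events:
        doc = {"id": str(event["txn_id"]), "op": event["op"], **event["payload"]}
        for attempt in range(max_retries):
            try:
                # Upserts keyed by id make replay idempotent, which is
                # what lets at-least-once delivery from the broker look
                # like exactly-once at the target.
                target_store[doc["id"]] = doc
                written += 1
                break
            except OSError:
                time.sleep(2 ** attempt)  # exponential backoff on transient errors
    return written

store = {}
events = [
    {"txn_id": 1, "op": "INSERT", "payload": {"amount": "100.00"}},
    {"txn_id": 1, "op": "UPDATE", "payload": {"amount": "95.00"}},  # later change wins
    {"txn_id": 2, "op": "INSERT", "payload": {"amount": "20.00"}},
]
count = run_cdc_batch(events, store)
```

The idempotent-upsert design choice matters: it makes reprocessing a Kafka offset range after a failure safe, which is where much of the pipeline's delivery guarantee actually comes from.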

Parallel validation

A key element of the migration was a parallel validation system that continuously compared data across both systems:

  • Checksum validation — ongoing hash comparison of data blocks between Oracle and Cosmos DB
  • Business rule validation — automated consistency checks (balance totals, transaction counts, aggregates)
  • Sample-based deep validation — random sampling and detailed comparison of individual records
  • Reconciliation engine — automatic identification and correction of discrepancies

The validation system ran continuously throughout the migration, generating daily consistency status reports.
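The checksum-validation idea can be sketched like this: hash each canonicalized record and combine the digests with XOR so a block's checksum is independent of row order, then compare blocks between the two systems and flag only mismatches for deep comparison. The block keys and record fields below are hypothetical:

```python
import hashlib

def block_checksum(records):
    """Order-independent checksum of a block: hash each canonicalized
    record, XOR the digests so row order does not matter."""
    acc = 0
    for rec in records:
        canonical = "|".join(f"{k}={rec[k]}" for k in sorted(rec))
        digest = hashlib.sha256(canonical.encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

def find_mismatched_blocks(source_blocks, target_blocks):
    """Compare per-block checksums across the two systems; return ids
    of blocks that need record-level reconciliation."""
    return [
        block_id
        for block_id, src in source_blocks.items()
        if block_checksum(src) != block_checksum(target_blocks.get(block_id, []))
    ]

oracle = {"2024-01": [{"id": 1, "amount": "10"}, {"id": 2, "amount": "20"}]}
cosmos = {"2024-01": [{"id": 2, "amount": "20"}, {"id": 1, "amount": "10"}]}  # same data, other order
clean = find_mismatched_blocks(oracle, cosmos)

cosmos_bad = {"2024-01": [{"id": 1, "amount": "10"}, {"id": 2, "amount": "21"}]}
dirty = find_mismatched_blocks(oracle, cosmos_bad)
```

Comparing checksums rather than records keeps validation traffic tiny; only blocks that disagree are re-read in full by the reconciliation engine.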

Rollback strategy

A detailed rollback plan existed for every phase of the migration:

  • Phase 1 — historical data — one-way migration with the option to revert to Oracle backup
  • Phase 2 — dual-write — both systems accept writes in parallel; switching reads is instantaneous
  • Phase 3 — cutover — Cosmos DB becomes the primary system; Oracle remains as a read-only fallback for 30 days

In case of any issue, switching back to Oracle was possible within 5 minutes.
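The Phase 2 dual-write pattern, and why switching back was a matter of minutes, can be sketched as below. Every write goes to both stores while reads are served from whichever store a flag names, so cutover and rollback are a flag flip rather than a data move (class and flag names are illustrative):

```python
class DualWriteRouter:
    """Sketch of the dual-write phase: writes fan out to both stores;
    reads follow a runtime flag, making the read switch instantaneous."""

    def __init__(self, oracle, cosmos):
        self.oracle = oracle            # stand-ins for the two stores
        self.cosmos = cosmos
        self.read_primary = "oracle"    # flipped to "cosmos" at cutover

    def write(self, key, value):
        # Both stores receive every write, so either can serve reads.
        self.oracle[key] = value
        self.cosmos[key] = value

    def read(self, key):
        store = self.oracle if self.read_primary == "oracle" else self.cosmos
        return store[key]

router = DualWriteRouter({}, {})
router.write("txn-1", {"amount": "50.00"})
before = router.read("txn-1")
router.read_primary = "cosmos"   # cutover: reads now hit Cosmos DB
after = router.read("txn-1")
router.read_primary = "oracle"   # rollback is the same flip in reverse
```

Because both stores hold identical data throughout the phase, the flag flip carries no migration risk of its own; the risk was retired earlier, when the validation system proved the stores equal.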

Phased migration

The migration proceeded in 4 waves over 18 months:

  1. Wave 1 — historical transactions older than 5 years (80M records) — low risk, access pattern validation
  2. Wave 2 — transactions 1–5 years old (70M records) — moderately active data, performance validation
  3. Wave 3 — transactions under 1 year (40M records) — active data, dual-write activation
  4. Wave 4 — live cutover (10M active records) — switch to Cosmos DB as the primary system

Results

60% cost reduction

Eliminating Oracle licences and moving to the Cosmos DB pay-per-request model delivered annual savings of 60% compared to the previous state. Elastic performance means the client pays only for actually consumed resources.

3x faster queries

The document model optimized for application access patterns delivered dramatic speedups. Queries that took hundreds of milliseconds on Oracle now run under 30ms. Aggregation queries for reporting improved 5x.
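The principle behind the speedup can be shown in miniature: when data is laid out by the key the application queries on, a lookup replaces a scan. The in-memory structures below are a toy analogy for relational filtering versus partition-keyed document storage, not real database code:

```python
# 10,000 synthetic transactions spread over 100 accounts.
transactions = [
    {"accountId": f"A-{i % 100}", "id": str(i), "amount": i} for i in range(10_000)
]

# "Relational" access pattern: scan and filter every row.
scan_result = [t for t in transactions if t["accountId"] == "A-7"]

# "Document" access pattern: data pre-partitioned by the query key,
# so the query is a direct lookup instead of a scan.
by_account = {}
for t in transactions:
    by_account.setdefault(t["accountId"], []).append(t)
point_result = by_account["A-7"]
```

Both paths return the same rows; the second does so without touching the other 99% of the data, which is the essence of modeling documents around access patterns.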

99.99% uptime during migration

Over the entire 18-month migration, there was zero service downtime. The bank’s clients experienced no change in availability or performance. All switching occurred transparently.

Foundation for modernization

The new data platform on Cosmos DB opened the path for further modernization — real-time analytics, event-driven architecture, and global data distribution for future expansion.

Technologies

Oracle · Azure Cosmos DB · Azure Data Factory · Kafka · Python · .NET · Terraform · Kubernetes

Want similar results?

We'll show you how.

Schedule a meeting