Our main product — an information system for an insurance company — had grown to a size that made rapid deployment of changes impossible. A single deployment took hours, and one bug in the billing module brought down the entire system. We decided to split the monolith into microservices. Here are our experiences from the first few months.
What a monolith is and why it stops being enough
A monolithic application is a single deployable artefact — in our case, a WAR file over 400 MB in size. It contained everything: contract management, billing, reporting, notifications, and imports from external systems. Every change required building the entire artefact, regression tests ran overnight, and deployment meant downtime.
The development team grew from five to twenty people. Merge conflicts became a daily routine. Every sprint ended in “integration hell”: three days during which everyone tried to get their changes into the main branch without breaking their colleagues’ work.
Microservices: the single responsibility principle
The idea behind microservices is simple: each service does one thing and does it well. It has its own database, its own API, its own deployment cycle. It communicates with others through a clearly defined interface — typically a REST API or messaging.
In March 2014, Martin Fowler and James Lewis published the definitive article on microservices, summarising what Netflix, Amazon, and Spotify had been practising for years. We started experimenting on a smaller project — the notifications system.
First steps: extracting the notification service
The notification module was an ideal candidate for extraction. It had a clear interface (receive an event → send an email/SMS), minimal dependencies on the rest of the system, and its own data model. We created a standalone Spring Boot application with a REST endpoint:
@RestController
@RequestMapping("/api/v1/notifications")
public class NotificationController {

    @Autowired
    private NotificationService service;

    @PostMapping
    public ResponseEntity<Void> send(@RequestBody NotificationRequest request) {
        service.dispatch(request);
        return ResponseEntity.accepted().build();
    }
}
Instead of calling a class directly, the monolith now sent an HTTP POST to the notification service. We added the circuit breaker pattern (Netflix’s Hystrix library) — if the notification service did not respond within 2 seconds, the system continued without the notification and queued it for retry.
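As a rough illustration, the call from the monolith can be wrapped in a Hystrix command along the following lines. NotificationClient and RetryQueue are hypothetical helpers standing in for our HTTP client and retry queue, and the exact property setters depend on the Hystrix version:
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;

public class SendNotificationCommand extends HystrixCommand<Void> {

    private final NotificationClient client;   // hypothetical thin HTTP client
    private final RetryQueue retryQueue;       // hypothetical durable retry queue
    private final NotificationRequest request;

    public SendNotificationCommand(NotificationClient client,
                                   RetryQueue retryQueue,
                                   NotificationRequest request) {
        // Group the command under "notifications" and fail fast after 2 seconds.
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("notifications"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withExecutionTimeoutInMilliseconds(2000)));
        this.client = client;
        this.retryQueue = retryQueue;
        this.request = request;
    }

    @Override
    protected Void run() {
        // Happy path: POST the request to the notification service.
        client.post("/api/v1/notifications", request);
        return null;
    }

    @Override
    protected Void getFallback() {
        // Timeout or open circuit: continue without the notification,
        // but queue it so it can be retried later.
        retryQueue.enqueue(request);
        return null;
    }
}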
Challenges we encountered
Distributed transactions
In a monolith, a single database transaction was enough. Across services, that approach does not work. We switched to an eventual consistency model — the system is not instantly consistent, but it converges within a short period. For business processes that required atomicity, we implemented the Saga pattern — a chain of compensating actions.
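A minimal, illustrative sketch of the pattern (not our production code): each step carries a forward action and a compensating action, and when a step fails, the steps already completed are undone in reverse order:
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Each saga step knows how to do its work and how to undo it.
interface SagaStep {
    void execute();
    void compensate();
}

class Saga {

    private final List<SagaStep> steps;

    Saga(List<SagaStep> steps) {
        this.steps = steps;
    }

    void run() {
        Deque<SagaStep> completed = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (RuntimeException e) {
                // A step failed: undo everything done so far, newest first.
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                throw e;
            }
        }
    }
}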
Service discovery
Where is the notification service running? On which port? Hardcoded URLs in configuration files quickly proved to be a nightmare. We deployed Netflix Eureka as a service registry — each service registers itself on startup and others query Eureka to find out where to send requests.
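For illustration, registering a service with Eureka via Spring Cloud Netflix amounts to little more than an annotation on its main class; the Eureka server address itself lives in configuration (eureka.client.serviceUrl.defaultZone), which is assumed here:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient  // register this instance with Eureka on startup
public class NotificationServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(NotificationServiceApplication.class, args);
    }
}
Other services then look the instance up by its logical name instead of relying on a hardcoded host and port.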
Monitoring and logging
In the monolith, one log file was enough. With microservices, we have dozens of logs on different servers. We introduced centralised logging via the ELK stack (Elasticsearch, Logstash, Kibana) and a correlation ID — a unique request identifier that travels through all services.
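A sketch of how the correlation ID can be propagated in a Spring Boot service: a servlet filter reads the ID from an incoming header, or generates one, and puts it into the SLF4J MDC so that every log line written while handling the request carries it. The header name X-Correlation-Id is our convention, not a standard:
import java.io.IOException;
import java.util.UUID;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class CorrelationIdFilter extends OncePerRequestFilter {

    private static final String HEADER = "X-Correlation-Id";

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain)
            throws ServletException, IOException {
        String correlationId = request.getHeader(HEADER);
        if (correlationId == null || correlationId.isEmpty()) {
            correlationId = UUID.randomUUID().toString();
        }
        MDC.put("correlationId", correlationId);
        try {
            response.setHeader(HEADER, correlationId); // echo it back for tracing
            chain.doFilter(request, response);
        } finally {
            MDC.remove("correlationId");  // avoid leaking IDs across pooled threads
        }
    }
}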
Testing
Unit tests remained straightforward. Integration tests, however, became more complex, because they need the dependent services to be running. We introduced Consumer-Driven Contract Testing using the Pact library: each consumer records its expectations of a provider’s API as a contract, and the provider verifies those contracts in its own build.
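A rough sketch of what a consumer-side test looks like with pact-jvm’s JUnit 4 support; the rule and annotation classes differ between pact-jvm versions, so treat the names below as approximate. The consumer (billing-service in this example) states what it expects from the notification service, and the resulting contract file is later verified against the real provider:
import au.com.dius.pact.consumer.Pact;
import au.com.dius.pact.consumer.PactProviderRuleMk2;
import au.com.dius.pact.consumer.PactVerification;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.model.RequestResponsePact;
import org.junit.Rule;
import org.junit.Test;

public class NotificationApiPactTest {

    // Starts a mock notification service on localhost:8089 for this test.
    @Rule
    public PactProviderRuleMk2 provider =
            new PactProviderRuleMk2("notification-service", "localhost", 8089, this);

    // The consumer's expectation: POSTing a notification returns 202 Accepted.
    @Pact(provider = "notification-service", consumer = "billing-service")
    public RequestResponsePact sendNotificationPact(PactDslWithProvider builder) {
        return builder
                .uponReceiving("a request to send a notification")
                .path("/api/v1/notifications")
                .method("POST")
                .willRespondWith()
                .status(202)
                .toPact();
    }

    @Test
    @PactVerification("notification-service")
    public void sendsNotification() {
        // Call provider.getUrl() + "/api/v1/notifications" with an HTTP client
        // and assert the 202 response; the pact file is written on success.
    }
}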
Technology stack
- Spring Boot — quick start for a new service, embedded Tomcat, auto-configuration
- Netflix OSS — Eureka (discovery), Hystrix (circuit breaker), Ribbon (load balancing)
- RabbitMQ — asynchronous messaging between services (see the sketch after this list)
- PostgreSQL — each service has its own schema or an entire database
- ELK Stack — centralised logging and monitoring
- Jenkins — CI/CD pipeline for each service independently
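For the RabbitMQ item above, a simplified sketch of how the asynchronous path between two services can look with Spring AMQP; the exchange, routing key, and queue names are placeholders:
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class ContractEvents {

    private final RabbitTemplate rabbitTemplate;

    public ContractEvents(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Publisher side (e.g. in the contract service): fire-and-forget event.
    public void publishContractSigned(String contractId) {
        rabbitTemplate.convertAndSend("contracts", "contract.signed", contractId);
    }

    // Consumer side (e.g. in the notification service): react to the event.
    @RabbitListener(queues = "notification.contract-signed")
    public void onContractSigned(String contractId) {
        // Build and dispatch the notification for the signed contract.
    }
}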
Results after six months
We extracted five services from the monolith: notifications, reporting, data import, authentication, and document management. Deploying a single service takes minutes instead of hours. The team is split into squads — each squad owns 1–2 services end-to-end.
Deployment frequency rose from one deployment per month to several per week for each service. A bug in reporting no longer means the entire system goes down — other services keep running.
When microservices do not make sense
Microservices are not a silver bullet. For a small team (2–3 developers) the overhead is enormous — you are operating a distributed system with all that entails. If you do not have clear boundaries between domains, you will end up with a “distributed monolith” — the worst of both worlds.
Our recommendation: start with a monolith, but with a clean modular structure. When you feel the pain — slow deployments, organisational friction, the need to scale only part of the system — then extract.
Microservices are an organisational as well as a technical change
Transitioning to microservices is not just about technology — it is a change in the way the team works. Conway’s Law holds: the architecture of a system reflects the structure of the organisation. If you want independent services, you need independent teams.
For us, it was the right move. But we would do it again only on the condition that we have a sufficiently large team and clear domain boundaries.