Continuous Integration with Jenkins — how we stopped deploying on Friday evenings

18. 06. 2013 · 4 min read · CORE SYSTEMS
Just a year ago our release process looked like this: a developer said “it works on my machine”, operations received a WAR file on a shared drive, deployed it to production on Friday at 6 PM and kept their phone at hand all weekend. Today we have Jenkins, automated builds and deployment to the test environment after every commit. And on Friday evenings we go for a beer. Here’s that story.

Why Jenkins (and not Bamboo, TeamCity, …)

The choice of CI tool was surprisingly simple. TeamCity is great, but the license for a larger team isn’t cheap. Bamboo fits well if you live in the Atlassian ecosystem (and we do — JIRA, Confluence, Stash). But Jenkins is free, has a huge plugin ecosystem and — most importantly — our ops team had experience with it from previous projects.

We installed Jenkins on a dedicated RHEL server (4 CPU, 16 GB RAM — CI needs more than you’d expect). Master-slave architecture with two build agents: one for Java projects (Maven), another for .NET things (MSBuild). The entire setup took two days including integration with Stash (git webhooks) and JIRA.

Pipeline: from commit to test environment

Our build pipeline is simple but covers what matters:

  1. Checkout — Jenkins pulls code from Stash (git) after every push
  2. Build — Maven clean install, compilation and packaging
  3. Unit tests — JUnit, ~1200 tests, run in 4 minutes
  4. Static analysis — FindBugs, PMD, Checkstyle (via Sonar)
  5. Deploy to DEV — automatic WAR deployment to GlassFish
  6. Integration tests — Selenium tests against the DEV environment
  7. Deploy to TEST — manual trigger (a click in Jenkins)

The full pipeline from commit to integration tests takes about 15 minutes. That’s a huge improvement over the state of “we build once a week and pray.”
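
Under the hood, each stage is just a shell step Jenkins runs on a build agent. A minimal sketch of the build-and-test step (illustrative, not our exact job configuration; it assumes Maven is on the agent's PATH):

    #!/bin/bash
    # Jenkins shell step behind stages 2 and 3 (build + unit tests).
    set -e               # any failing command fails the build

    mvn clean install    # compile, run the ~1200 JUnit tests, package the WAR

Jenkins marks the build red whenever the script exits non-zero, so set -e is all the orchestration this step needs.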

SonarQube — a quality gate that blocks bad code

SonarQube (Sonar back then) was a game changer for code quality. We set up a quality gate: if new code has test coverage below 70% or contains critical bugs or security vulnerabilities, the build fails.
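
The analysis itself runs as one more Maven step in the pipeline. A hedged sketch, assuming the sonar-maven-plugin is configured in the parent POM (the server address is a placeholder, not ours):

    # Stage 4: push FindBugs/PMD/Checkstyle and coverage results to Sonar.
    # A server-side build-breaker rule then fails this goal when the
    # thresholds (coverage below 70%, critical issues) are violated.
    mvn sonar:sonar -Dsonar.host.url=http://sonar.example.local:9000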

The first week had a lot of red builds. Developers complained. But after a month they got used to writing tests and the code improved significantly. Technical debt in Sonar drops every sprint, which makes for a great graph at the sprint retrospective.

Deployment automation — halfway there

We deploy to DEV and TEST automatically. But production deployment is still a manual process with change management, approval and a rollback plan. And honestly — for our clients in regulated environments that’s probably how it will stay. An auditor doesn’t want to hear that a robot deploys to production.

The deployment script is a simple Bash script: stops the GlassFish domain, copies the new WAR, starts it and verifies the health check URL. It’s not Puppet or Chef — it’s a script. But it works and the ops team understands it. Sometimes simplicity is more valuable than elegance.
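
For the curious, here is roughly what that script does (paths, domain name and the health-check URL are illustrative placeholders, not our real ones):

    #!/bin/bash
    # Deploy a WAR to GlassFish and verify it came up. Illustrative sketch.
    set -e

    GF=/opt/glassfish3
    DOMAIN=domain1
    WAR=$1                           # path to the WAR, passed in by Jenkins

    "$GF/bin/asadmin" stop-domain "$DOMAIN"
    cp "$WAR" "$GF/glassfish/domains/$DOMAIN/autodeploy/app.war"
    "$GF/bin/asadmin" start-domain "$DOMAIN"

    # Give autodeploy a moment, then check the app actually answers.
    sleep 30
    curl -sf http://localhost:8080/app/health >/dev/null \
      || { echo "Health check failed"; exit 1; }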

Git workflow — feature branches and pull requests

Along with CI we also switched from SVN to Git (Atlassian Stash). And with that came feature branches and pull requests. Every feature is developed in its own branch, a pull request is created when complete, a colleague does a code review and merges into develop.
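
In practice the flow looks like this (naming branches after JIRA tickets is our convention; the ticket shown is made up):

    git checkout -b feature/PROJ-123-export develop   # one branch per ticket
    # ...commit as usual...
    git push origin feature/PROJ-123-export           # then open a PR in Stash
    # the reviewer comments, approves and merges into develop in the Stash UI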

At first there was resistance — “why do we have to do code reviews, it’s a waste of time.” After a month, code reviews had caught two security issues and three performance bugs. Now we can’t imagine merging without a review.

Problems we encountered

Flaky tests. Selenium tests are notoriously unstable. A test passes locally, fails on CI. Solution: explicit waits instead of Thread.sleep, isolated test data, clean browser sessions. We still have a ~5% flaky rate, but that’s better than the 30% we started with.

Build time. 15 minutes is OK, but as the project grows it will get worse. We’re considering parallelizing Maven modules and possibly switching to Gradle, which is faster for incremental builds.
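
If we stay on Maven, the first cheap win is probably its built-in parallel mode (Maven 3), which builds independent modules concurrently:

    mvn -T 1C clean install    # one build thread per CPU core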

Environment parity. “It works on DEV, not on TEST” is a frustration CI doesn’t fully solve. But it helps — at least you know the build artifact is identical. The problem is environment configuration, which we still handle manually. Maybe one day we’ll try Puppet or Chef.

Numbers that speak for themselves

  • Release cycle: from 4 weeks to 2 weeks (and trending toward 1)
  • Production incidents after release: from ~5 to ~1 per quarter
  • Average time to fix a bug: from 3 days to 4 hours
  • Developer feedback loop: from “in a week” to 15 minutes

Start simple

You don’t need the entire DevOps stack from day one. Install Jenkins, connect it to git, add automated builds and tests. That alone will change the way your team works. The rest — deployment automation, infrastructure as code, monitoring — add gradually, when you’re ready. CI is a journey, not a destination.

Tags: ci, jenkins, automation, testing