
ELK Stack — centralised logging

22. 07. 2015 · Updated: 24. 03. 2026 · 1 min read · CORE SYSTEMS, development
This article was published in 2015. Some information may be outdated.

Ten servers, thirty services. Finding an error means SSH-ing into every server and grepping its logs. The ELK stack collects logs in one place and makes them searchable.

Three pillars

Elasticsearch: distributed search engine that stores and indexes JSON documents.
Logstash: pipeline for parsing and enriching logs before indexing.
Kibana: visualisations and dashboards on top of Elasticsearch.
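Once events are indexed, Elasticsearch can be queried directly, for example from Kibana's Dev Tools console. A minimal sketch of such a query — the index name follows the daily `logs-*` pattern used later in this article, and the `level` field assumes the log line has been parsed into structured fields:

```json
GET logs-2015.07.22/_search
{
  "query": {
    "match": { "level": "ERROR" }
  }
}
```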

Logstash configuration

input {
  beats { port => 5044 }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVACLASS:class} - %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
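The beats input above expects a log shipper such as Filebeat running on each server. A minimal filebeat.yml sketch — the log path is illustrative, and the syntax shown is that of recent Filebeat versions (older 1.x releases used a different layout):

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log

# Ship events to the Logstash beats input on port 5044
output.logstash:
  hosts: ["logstash:5044"]
```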

Structured logging

Best practice: log directly in JSON, so no grok parsing is needed — in Java, use logstash-logback-encoder.
Retention: daily indices, deleted after 30 days.
Sizing: give the Elasticsearch heap at most 50% of RAM, and never more than 32 GB (above that the JVM loses compressed object pointers).
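With the logstash-logback-encoder library on the classpath, JSON logging can be sketched as a logback.xml along these lines (the file path and appender name are illustrative):

```xml
<configuration>
  <!-- LogstashEncoder writes each log event as one JSON object per line -->
  <appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
    <file>/var/log/myapp/app.json</file>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON_FILE"/>
  </root>
</configuration>
```

Filebeat then ships these JSON lines as-is, and the grok filter in the Logstash pipeline can be dropped.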

ELK is the foundation of observability

Centralised logging is a necessity for distributed systems.

Tags: elk, elasticsearch, logstash, kibana
