ELK Stack — centralised logging

22. 07. 2015 · 1 min read · CORE SYSTEMS · development

Ten servers, thirty services. Finding an error in the logs means SSH-ing to every server and grepping. The ELK stack centralises logs in one place and makes sense of them.

Three pillars

Elasticsearch: Distributed search engine for JSON documents.
Logstash: Pipeline for parsing and enriching logs.
Kibana: Visualisations and dashboards.

Logstash configuration

# Receive events from Beats agents (e.g. Filebeat) on port 5044
input {
  beats { port => 5044 }
}

# Parse classic plain-text Java log lines into structured fields
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{JAVACLASS:class} - %{GREEDYDATA:msg}" }
  }
}

# Write to Elasticsearch, one index per day
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
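
The grok pattern above expects plain-text Java log lines. A minimal sketch of a service producing such lines, assuming SLF4J over Logback with a "%d %level %logger - %msg%n" pattern (the class and message are illustrative, not from the article):

package com.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void chargeCard(String orderId) {
        // With the pattern above this prints e.g.
        // 2015-07-22 14:03:11,201 ERROR com.example.OrderService - Charge failed for order 42
        // which the grok filter splits into timestamp, level, class and msg fields.
        log.error("Charge failed for order {}", orderId);
    }
}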

Structured logging

Best practice: log directly in JSON, so no grok parsing is needed; in Java this is straightforward with logstash-logback-encoder. Keep daily indices and delete them after 30 days. Cap the Elasticsearch heap at 50% of RAM and never above 32 GB.
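
A minimal sketch of the structured approach from Java, assuming the logstash-logback-encoder dependency and a Logback appender configured with its LogstashEncoder (appender configuration not shown; the class and field names are illustrative):

package com.example;

import static net.logstash.logback.argument.StructuredArguments.kv;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    public void recordPayment(String orderId, long amountCents, long durationMs) {
        // With LogstashEncoder this is written as one JSON event with top-level
        // fields "orderId", "amountCents" and "durationMs" -- no grok parsing needed.
        log.info("payment processed",
                kv("orderId", orderId),
                kv("amountCents", amountCents),
                kv("durationMs", durationMs));
    }
}

Each field becomes a separate, queryable attribute in Elasticsearch, so Kibana can filter on orderId or durationMs without any regex parsing.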

ELK is the foundation of observability

Centralised logging is a necessity for distributed systems.

elk, elasticsearch, logstash, kibana