Installing a logging server

By Roger, Sat 08 October 2016, in category Software

elasticsearch, install, kibana, logging, logstash

The more software and hardware you have, the more logs there are to check. It became a lot more than I could bear, so I decided to build a logging server with Logstash as a collector for all the data, Elasticsearch as the central storage and Kibana for the visualisation. Maybe it's a little overkill, but if you can do it like this, you can also do it in a big datacenter.


Elasticsearch has a very high memory consumption: it needs at least 2GB of RAM. So put 4GB in your virtual machine, or more if you have a lot of data. Also watch your disk space. As time goes by, the indices can consume a lot of it!
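If you want to pin the heap size explicitly, Elasticsearch 5.x reads its JVM options from /etc/elasticsearch/jvm.options. The values below are only an example for a 4GB machine; Elastic recommends setting the minimum and maximum heap to the same value:

```
# /etc/elasticsearch/jvm.options (example values, adjust to your machine)
-Xms2g
-Xmx2g
```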

I followed the guide in the Logstash documentation, combined a couple of tutorials and a lot of googling for the installation.

The installation is done on a clean Debian server, version Jessie. Because we are using some newer software, I added the backports repository. Put in /etc/apt/sources.list:

# Backports
deb http://ftp.debian.org/debian jessie-backports main

Next we add the Elasticsearch repo.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-5.x.list
apt-get update

Now we can install the software.

apt-get install openjdk-8-jre
apt-get install logstash elasticsearch kibana



Edit the configuration file /etc/elasticsearch/elasticsearch.yml and make sure Elasticsearch is only listening for local connections, or on the interfaces you want it to listen on. All the other options are for you to decide.
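As an illustration, the relevant setting in elasticsearch.yml looks like this (the addresses are just placeholders, pick your own):

```yaml
# Listen on localhost only; add an interface address if Logstash
# or Kibana run on another machine.
network.host: ['localhost']
```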


Edit the configuration file /etc/kibana/kibana.yml and change server.host to your listening IP address. I also disabled the logging: my syslog was very busy with JSON messages and I don't need them. But for debugging you can always remove the last line.

server.host: "192.168.1.10"
logging.quiet: true


You could write a book about Logstash, but I will only show what I did to collect my syslog data. Make a new file: /etc/logstash/conf.d/syslog.conf. In it are the syslog collector and a filter for when you are using the Debian 'snoopy' package, like me. It will create new fields with the audit logging.

input {
  syslog {
    port => 5000
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    if [program] == "snoopy" {
      grok {
        match => ["message", "\[uid:%{INT:uid:int} sid:%{INT:sid:int} tty:%{DATA:tty} cwd:%{DATA:cwd} filename:%{DATA:filename}\]: %{GREEDYDATA:command}"]
      }
    }
  }
}
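To see what that grok pattern extracts, here is a rough Python equivalent. The regex is my approximation of the %{INT}, %{DATA} and %{GREEDYDATA} macros, and the sample log line is made up:

```python
import re

# Approximation of the grok pattern above:
# INT -> \d+, DATA -> .*? (non-greedy), GREEDYDATA -> .*
# The named groups match the grok field names.
SNOOPY = re.compile(
    r"\[uid:(?P<uid>\d+) sid:(?P<sid>\d+) tty:(?P<tty>.*?) "
    r"cwd:(?P<cwd>.*?) filename:(?P<filename>.*?)\]: (?P<command>.*)"
)

# A made-up example of what snoopy writes to syslog.
sample = "[uid:0 sid:1 tty:/dev/pts/0 cwd:/root filename:/bin/ls]: ls -la"
fields = SNOOPY.match(sample).groupdict()
# fields now holds uid, sid, tty, cwd, filename and the full command.
```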

That's the input; now we need somewhere to send the output. Make a new file, /etc/logstash/conf.d/elastic_search.conf:

output {
  if !("_grokparsefailure" in [tags]) {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{type}-%{+YYYY.MM.dd}"
    }
  }
  if "_grokparsefailure" in [tags] {
    file {
      path => "/var/log/logstash/grokerror"
    }
  }
}

This will send all the data to Elasticsearch. If something goes wrong with grok, and believe me, it will, there's a logfile where you can find the failed messages. Those messages will not be sent to your Elasticsearch server.


Start everything, and take some time between starting the services. It's Java, and not memory friendly...

service elasticsearch start
service kibana start
service logstash start

Have fun in your Kibana interface at http://yourserver:5601. Take some time to make yourself at home. You can find a lot of documentation and tutorials on the Elasticsearch website.