
HAProxy – Logging within a Docker container

Hey,

So today we will continue looking at HAProxy – however, this time we will be using Docker to host our load balancer. While it is no problem to just pull the stock image from Docker Hub and run it instantly, it does not give you out of the box the one thing I was after… the logs.

That’s why I went ahead and created my own version of HAProxy which includes rsyslog. The repository with the image can be found on GitHub.

In order to run the container we just need to execute the following commands:

  1. To get the most up-to-date image from my Docker Hub repo:
    docker pull rafpe/docker-haproxy-rsyslog
  2. To start the container (assuming you have the config file in the current directory):
    docker run -it -d -P -v ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg rafpe/docker-haproxy-rsyslog

 

Once you do this, the container should be up and running, and if you query for the current Docker containers you should see something similar to the output below:

[Screenshot: docker ps / docker logs output showing HAProxy log entries]

 

As you can see, the logs become directly visible when querying the container with the docker logs command.
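If you want to check this on your own host, something along these lines should do (a sketch – the container ID or name will differ):

# list running containers to grab the ID/name
docker ps

# follow its output, which now includes the HAProxy traffic logs shipped via rsyslog
docker logs -f <container_id_or_name>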

In one of the future posts we will be investigating log format customisations as well as features included in HAProxy since 1.6, such as log tags.

 

If you have any problems configuring this because of a missing config file, you can use the sample below:

global
    log 127.0.0.1 local2
    maxconn 2000
    pidfile /var/run/haproxy.pid

    tune.ssl.default-dh-param 2048

    # SSL ciphers
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA



defaults
    mode    http
    option  httplog
    option  dontlognull
    option  forwardfor
    option  contstats
    option  http-server-close
    option log-health-checks
    retries 3
    option  redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000

    # make sure log-format is on a single line
    log global
    log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}


frontend http-in
    bind *:80
    
    # Default backend to be used
    default_backend will-be-back-soon


backend will-be-back-soon
   balance roundrobin
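Before starting the container it can also be handy to validate the config file. A minimal sketch, assuming the image is built on top of the official HAProxy image and therefore exposes the stock haproxy binary:

docker run --rm -v ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg rafpe/docker-haproxy-rsyslog haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg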

 


Logstash – Filtering VyOS syslog data

Hey, so over the last days/weeks 🙂 I have been working quite a lot with the ELK stack, especially on getting data from my systems into Elasticsearch. There would not be any problem if not for the fact that the default parsing did not quite do the job. But what would IT life be without challenges?

So in this post I will explain briefly how I have overcome this problem, and I’m sure you will be able to use this or even make it better.

We will look into the following:

* Incoming raw data

* Creating filter

* Enjoying results

 

Incoming raw data:

So you have your VyOS box doing the hard work on the edge of your network. And now you would like to have control over who is knocking on your door, or to find the root cause when troubleshooting firewall rules.

An example of incoming data from my box looks similar to the following:

<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] 
[internet_local-default-D]IN=eth2 OUT= 
MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 
SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 
PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP 
SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0

If we just apply basic syslog filtering, it will not extract all the required fields. The challenge here is that we need to pull out the firewall rule name (the part in square brackets), and after that we have a clear use case for the key-value filter.

 

Creating filter:

So the time has come to use some magical skills to create the configuration for the Logstash filter. I would like to stress that using different approaches can have an impact on performance, both negative and positive. As I’m still getting familiar with Logstash, this might not be the best solution, but I will definitely keep exploring.

You will notice that my filters use conditional statements so I do not process data unnecessarily. In my case VyOS traffic is tagged as syslog and contains a specific string in the message.

So without further ado… we begin by parsing out data from the message that will definitely get extracted.

  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }

Points of interest here:

  • The grok filter DOES NOT break on match
  • We match on message and then on the extracted syslog_message to get our firewall rule from between [ ]

Next we make changes to our fields on the fly using mutate:

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "= ", "=xxx" ]            # Here we remove scenario where this value is empty
    }

Points of interest:

  • I add a field called event_type with the value firewall so that in the future I can quickly query for those events
  • I rename my previous field (firewall_rule) to a nested field
  • And lastly I use gsub to mitigate the problem of missing values in the key-value pairs

Once this is done, I extract the remaining values using the kv filter, which is configured as follows:

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }

Points of interest:

  • I use include_keys so only the fields listed in the array are extracted (which has a positive impact on performance)
  • I tried field_split to help out with one of the previous challenges but it did not make much difference
  • And lastly I specify my new nested fields for the extracted values

 

So that’s it! The complete filter file looks as follows:

filter {
  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }
    
    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "OUT= MAC=", "MAC=" ]            # Here we remove scenario where this value is empty
    }

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }
  }
}

 

Enjoying results:

Now we need to test whether this really works as we expect it to. To check this, we will of course use Docker.

First I will create the aforementioned config file and name it logstash.conf. Once that’s done, we bring the container up with the following command:

docker run -d -p 25666:25666 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

This creates a container which I can then test locally. For this to work, the config also needs an input section (e.g. a tcp/udp input) and an output (e.g. stdout with the rubydebug codec) around the filter shown above – see the sketch below.
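A minimal sketch of such a wrapper, assuming a plain TCP input on the published port 25666 (the type field is set so the conditional in our filter matches):

input {
  tcp {
    port => 25666
    type => "syslog"
  }
}

# ... the filter { } block shown above goes here ...

output {
  stdout { codec => rubydebug }
}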

Then I split my screen using tmux and send a test request while watching the results from docker logs.
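For example (a sketch – I simply replay the raw sample line from earlier in this post with netcat, assuming it is installed):

# pane 1 – follow the container output
docker logs -f <logstash_container_id>

# pane 2 – push the sample VyOS line into the TCP input
echo '<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] [internet_local-default-D]IN=eth2 OUT= MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0' | nc localhost 25666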

[Screenshot: parsed VyOS event shown in the docker logs output]

 

And that’s it! You have beautifully working parsing for your VyOS box! If you have any comments or improvements – feel free to share!

 

 


Docker Compose and ELK – Automate the automated deployment


This post contains information which has been updated in a newer post:

docker ELK with compose v2

However, to get an idea of how the solution works, I recommend just reading through 🙂


Hello!

It’s been a while since things were a bit quiet here, but there was a lot keeping me busy. And as you know, in the majority of scenarios time is what we are short on 🙂 Today I wanted to share with you an update to one of my previous posts where we set up ELK in an automated way.

When I originally finished the post it of course “was working like a charm!” and then I just left it for a while and focused on a couple of other projects. Recently I visited that page again as I wanted to quickly deploy an ELK stack for my testing… and then surprise – it does not work?! Of course the IT world is like a super-speed train 🙂 and it seems like I just stopped at a station and forgot to jump back on 🙂

So from my perspective it was a great opportunity to polish some extra bash skills and refresh my knowledge of Elasticsearch, Logstash and Kibana.

So what’s changed?

First of all, there is now one major script which gets the job done. The only thing you need to do is specify a cluster name for Elasticsearch.

I have also added some folder-existence checking, so it does not throw unnecessary error messages when the folders already exist.

How do you run it now?

Start by downloading the script locally to the folder under which we will create the remaining folders for our components:

curl -L http://git.io/vBPqC >> build_elk.sh

The -L option is there to follow redirects (as that’s what git.io does for us).

 

Once done, you might need to make it executable:

sudo chmod +x build_elk.sh

 

And that’s all 🙂 The last thing to do is execute the script with the first argument being the desired name of your Elasticsearch cluster. The output is almost instant and promising 🙂

$ sudo ./build_elk.sh myelastico
Cloning into '/home/bar/compose/elk_stack'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 7), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (26/26), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/central'...
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 17 (delta 4), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (17/17), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/agent'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Cloning into '/home/bar/elasticsearch/config'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Creating redis-cache
Creating elasticsearch-central
Creating logstash-central
Creating kibana-frontend
Creating logstash-agent

 

Let’s check the Docker daemon to see whether our containers are indeed running…

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ff4f41753d6e        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25827->25827/udp, 0.0.0.0:25827->25827/tcp   logstash-agent
be97a16cdb1e        kibana:latest       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:5601->5601/tcp                               kibana-frontend
d3535a6d9df8        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25826->25826/tcp, 0.0.0.0:25826->25826/udp   logstash-central
38b47ffbb3e7        elasticsearch:2     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp       elasticsearch-central
100227df0b50        redis:latest        "/entrypoint.sh redis"   2 minutes ago       Up 2 minutes        0.0.0.0:6379->6379/tcp                               redis-cache
$

 

They all are 🙂 That took less than a second (although I already had the images on my host…) and what if we just check the browser?

[Screenshot: Kibana running in the browser]

 

 

And what if anything changes? Well, this is all in git 🙂 so just pull the changes and you will definitely get the most up-to-date version. But maybe you have some suggestions or improvements? Then just push them – I’m sure it would be beneficial 🙂

 

Below is the view of the gist 🙂


Running ElasticSearch/Kibana and Logstash on Docker

In today’s world, if the combination of words in the subject is new to you, it means you need to catch up quickly 😀 In the IT world, Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say that nowadays we have DevOps engineers who change the world with the click of a button 😀

Today we will discuss how, by running Elasticsearch, Logstash and Kibana with Docker, you can visualise your environment’s behaviour and events. At this stage I would like to point out that this can be useful not only in IT, where you get insights into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single “go” you will build the required components to see its potential.

Since this will only touch on the real basics, I will try to point you to more interesting sources of information.

The whole exercise will be done on a host running Ubuntu with the following version installed:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker engine on this OS, so make sure you have the engine installed.
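If you still need to install it, the convenience script was the quickest route on Ubuntu at the time (a sketch – only pipe remote scripts to your shell if you are comfortable doing so):

wget -qO- https://get.docker.com/ | sh

# quick sanity check that the engine answers
sudo docker version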

As a quick verification, this is the version of Docker running during the write-up of this post:

Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

 

So since we have that ready, let’s fire up an instance of Elasticsearch. Since we would like to store data outside of the container, we need to create a folder somewhere on the host. As this is only a non-production exercise, I will just use a simple folder under my current working directory. For this purpose I have created a folder called cDocker and, within it, the subfolder data/elasticsearch. This can be achieved by running the following in the console:

sudo mkdir -p cDocker/data/elasticsearch

Once ready, we can kick off the creation of our container:

sudo docker run -d --name elasticsearch -p 9200:9200 -v $(pwd)/cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all the required image layers, we can see the container running on our Docker host:

[Screenshot: docker ps showing the elasticsearch container running]

 

For communicating with the API, you can see we have exposed port 9200. For ease of making API calls I will be using the Postman add-on for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:

[Screenshot: Elasticsearch _status response in Postman]
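If you prefer the command line over Postman, the same check can be done with curl from any machine that can reach the host (a sketch):

curl -s http://<IP>:9200/_status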

 

For the next part we will create the Logstash container. We do this by creating a container based on the Logstash image. The main difference here is that we will link our elasticsearch container so that they are able to talk to each other:

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 TCP/UDP and mount a volume for the configuration (here I use $(pwd) to point to an existing folder in my current console session). Next we link our elasticsearch container and give it the alias db. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf. Since that file does not exist yet, we must create it. The contents of this file come directly from the Logstash documentation and are a really basic configuration enabling us to see a working solution.
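The exact file is not reproduced here, but a minimal first.conf in that spirit could look roughly like the sketch below – note that the elasticsearch output points at the db alias we created with --link:

input {
  tcp { port => 25826 }
  udp { port => 25826 }
}

output {
  # forward events to the linked elasticsearch container
  # (on Logstash 1.x the option is host => "db" instead of hosts)
  elasticsearch { hosts => ["db:9200"] }

  # also print events so they show up in docker logs
  stdout { codec => rubydebug }
}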

Now, if I open two session windows – one to tail the Logstash container logs and the other to create a telnet connection to port 25826 – we will see that the message I type into the telnet session gets processed and forwarded to Elasticsearch.
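Something along these lines (a sketch – substitute the address of your Docker host):

# window 1 – tail what Logstash does with incoming events
docker logs -f logstash

# window 2 – open a raw connection and type a test message
telnet <docker-host-ip> 25826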

[Screenshot: test message from telnet visible in the Logstash output]

 

Of course this kind of configuration is only good as an exercise, and it quickly shows how nicely we can get the system running.

So since that’s ready, it’s time to set up Kibana. It’s quite easy using the default image from Docker Hub. I have chosen to link the containers for the ease of this exercise:

docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana

And now, seconds later, we can log in to our Kibana server on port 5601 and take a look at our forensic details 🙂 The message we sent before as a test is already visible! How cool is that 😀?

 

 

[Screenshot: the first test event visible in Kibana]

Let’s add some extra fake messages so we have something to visualise. I will do that using the telnet command, sending some dummy messages to Logstash.

After that’s done 🙂 we can create visualisations – and from there onwards, awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power in there, and you should explore the available resources if you want to do more 😀

[Screenshot: basic pie-chart dashboard in Kibana]

 

Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production: using Redis as a buffer to avoid bottlenecks and lost messages, avoiding overly complex message parsing, etc. We will try to look into some of those in upcoming posts!

 

 


PowerShell – using NLog to create logs

If you are after a logging framework, I can recommend one that I have been using not only on Windows but also in C# development for web projects. It’s called NLog and it is quite powerful, allowing you to log not only in a specific format or layout, but also to make logging reliable (by having e.g. multiple targets with failover) with the required performance (e.g. async writes). That’s not all! Thanks to the out-of-the-box features you can log to flat files, databases, network endpoints, web APIs… that’s just great!

NLog is available on GitHub here, so I recommend that you go there and get yourself familiar with the wiki, which explains usage and shows some examples.

At this point I can tell you that you can either use an XML config file or configure the logger on the fly before creation. In this post I will show you both options so you can choose the one that suits you best.

 

The high-level process looks as follows:

  1. Load assembly
  2. Get configuration ( or create it )
  3. Create logger
  4. Start logging

 

NLog with XML configuration file

The whole PowerShell script, along with the configuration module, looks as follows:

Now, the thing that may be of interest to you is the way we load our assembly. What I use here is reading the file into a byte array and then passing that as a parameter to the assembly Load method.

$dllBytes = [System.IO.File]::ReadAllBytes( "C:\NLog.dll")
[System.Reflection.Assembly]::Load($dllBytes)

The reason to do it this way is to avoid situations where the file is locked by ‘another process’. I have run into that in the past, and with this approach it will not happen 🙂

 

The next part, with customised data, is used when we would like to pass custom fields into our log. The details are described here on the NLog page.

 

After that I load the configuration and assign it:

$xmlConfig                       = New-Object NLog.Config.XmlLoggingConfiguration("\\pathToConfig\NLog.config")
[NLog.LogManager]::Configuration = $xmlConfig
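For reference, a minimal NLog.config along those lines might look roughly like the sketch below – the target name, file path and layout simply mirror the on-the-fly example later in this post, they are not the author’s actual file:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- single file target, one log file per day -->
    <target name="logfile" xsi:type="File"
            fileName="D:\Tools\${date:format=yyyyMMdd}.log"
            layout="timestamp=${longdate} host=${machinename} logger=${logger} loglevel=${level} message=${message}" />
  </targets>
  <rules>
    <!-- everything from Info level upwards goes to the file target -->
    <logger name="*" minlevel="Info" writeTo="logfile" />
  </rules>
</nlog>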

 

NLog with configuration declared on the fly

As promised, you might prefer to use NLog with the configuration created on the fly instead of a centralised one. In the example below I will show you the file target as one of the options. There is much more, so you may want to explore the remaining options.

    # Create file target
    $target = New-Object NLog.Targets.FileTarget  

    # Define layout
    $target.Layout       = 'timestamp=${longdate} host=${machinename} logger=${logger} loglevel=${level} message=${message}'
    $target.FileName     = 'D:\Tools\${date:format=yyyyMMdd}.log'
    $target.KeepFileOpen = $false
    
    # Init config
    $config = new-object NLog.Config.LoggingConfiguration

    # Add target 
    $config.AddTarget('File',$target)

    # Rule 1: log Info level and above to the file target
    $rule1 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Info,$target)
    $config.LoggingRules.Add($rule1)

    # Rule 2: Off level – this rule matches nothing
    $rule2 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Off,$target)
    $config.LoggingRules.Add($rule2)

    # Rule 3: Error level and above to the same target
    $rule3 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Error,$target)
    $config.LoggingRules.Add($rule3)

    # Save config
    [NLog.LogManager]::Configuration = $config

    $logger = [NLog.LogManager]::GetLogger('logger.name')

 

Engineers…. Start your logging 🙂

Once that’s done, there is not much left 😀 you can just start logging by typing:

$logger_psmodule.Info('some info message')
$logger_psmodule.Warn('some warn message')
$logger_psmodule.Error('some error message')