
HA Nginx in Docker with keepaliveD

Do you need to create an HA proxy and are thinking of Nginx? Or maybe thinking even further … about Nginx and Docker? Then read on – this is something really simple, and something you can definitely take to the next level.

In my scenario I need something which you could call … hmmm, a “service gateway”?! Which in a nutshell is a solution that exposes an HA load balancer (and in the future also DNS with a connection to Consul).

A raw and basic design could look as follows:

[Diagram: keepaliveD – basic design]

So what we have here are 2 hosts that will expose a VIP address. Nothing really cutting edge, right 😀 And as recently I work a lot with Ubuntu, the following steps are geared towards that OS.

Installing:

Installation is really plain and simple. You can use APT to get the package installed by running:

apt-get update && apt-get install keepalived

 

And that's it for the installation part 🙂 nothing like a quick install 🙂

 

Configuring:

Configuration is something on which you can spend some more time tuning it to your needs. Keepalived has a lot of options and I recommend doing a bit of reading. I will highlight here only the bare basics to get you running, but complex scenarios are well possible.

Also, you will notice that I do not use multicast but have switched to unicast.

 

The first thing you want to configure is non-local binding (otherwise processes would not be able to bind to the VIP while it is not assigned to the local host, and we would not get the solution running):

echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf
sysctl -p
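
If you want to be sure the setting took effect, you can simply read it back:

# Should print: net.ipv4.ip_nonlocal_bind = 1
sysctl net.ipv4.ip_nonlocal_bind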

 

Next we create an Upstart configuration file for our service:

vi /etc/init/keepalived.conf

 

and once that's done you can paste in the contents of the snippet below

description "lb and ha service"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

exec /usr/sbin/keepalived --dont-fork

 

Once done, we create the config file /etc/keepalived/keepalived.conf (on our first node):

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 91
        priority 100
        virtual_ipaddress {
            # YOUR VIP ADDRESS # 
        }
        unicast_src_ip #YOUR-1st-NODE-ADDRESS
        unicast_peer {
         #YOUR-2nd-NODE-ADDRESS
        }
}

 

And on the other machine you place the same config, but swap the addresses in unicast source and peer:

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 91
        priority 100
        virtual_ipaddress {
            # YOUR VIP ADDRESS # 
        }
        unicast_src_ip #YOUR-2nd-NODE-ADDRESS
        unicast_peer {
         #YOUR-1st-NODE-ADDRESS
        }
}

 

More details on configuration can be found here >>> http://www.keepalived.org/LVS-NAT-Keepalived-HOWTO.html (I have found this link to be full of useful information).

 

Bringing the service to life:

Now you might say that I have configured both nodes as masters. That's true – with equal priorities, the first one to come online will claim the VIP and keep the master role.
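
If you prefer a deterministic election instead of relying on boot order, a common variation (not what I used above, just a sketch) is to start the second node as BACKUP with a lower priority:

vrrp_instance VI_1 {
        interface eth0
        state BACKUP          # start as backup instead of master
        virtual_router_id 91
        priority 90           # lower than node 1, so node 1 wins when both are up
        virtual_ipaddress {
            # YOUR VIP ADDRESS # 
        }
        unicast_src_ip #YOUR-2nd-NODE-ADDRESS
        unicast_peer {
         #YOUR-1st-NODE-ADDRESS
        }
}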

Now on both nodes you can execute

# Start service
service keepalived start
# show ip addresses
ip addr show eth0

And on one of them you should see your VIP address being online. Voilà! HA service running and operational.
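
For reference, on the master the VIP shows up as an extra address on the interface – something along these lines (the addresses here are made up for illustration):

$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 192.168.0.11/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.100/32 scope global eth0    # <-- the VIP, added by keepalived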

 

Testing Nginx:

Now the time has come to test Nginx. For the purposes of this demo I have set up both machines to host a Docker container running Nginx.
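
Setting those up can be as simple as this on each host (the image, paths and page contents here are just my own example, adjust to taste):

# host-1 ( on host-2 echo "Hello from Nginx-2" instead )
mkdir -p /opt/nginx
echo "Hello from Nginx-1" > /opt/nginx/index.html
docker run -d --name nginx -p 80:80 -v /opt/nginx:/usr/share/nginx/html:ro nginx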

[Screenshot: HA-nginx-docker-keepalived]

 

Great! So both are listening on the correct VIP address. One is displaying the message “Hello from Nginx-1” and the second “Hello from Nginx-2”. Let's test that from a client machine …

Initial request from our client machine :

[Screenshot 2015-12-13 18.14.27]
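
In plain text form the same check is just a curl against the VIP (using the made-up example address from above):

$ curl http://192.168.0.100/
Hello from Nginx-1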

 

And now let me disable the network interface on host-1; once that's done we make the request again

[Screenshot 2015-12-13 18.17.45]

 

 

The error you see here is kind of my fault (but I wanted to highlight this), since the keepaliveD service was stopped on that host. Once I started the service, the request was answered from the other host.

 

Summary:

So now, what are your options? Well, quite a lot – you can e.g. set up GlusterFS and replicate your Nginx config files, or use Consul – explore consul-template and use it to generate dynamic Nginx config files …

If you have any interesting use case scenarios leave them in comments!

 

 


Logstash – Filtering Vyos syslog data

Hey, so in the last days/weeks 🙂 I have been working quite a lot with the ELK stack, especially on getting data from my systems into Elasticsearch. There would not be any problem if not for the fact that the default parsing did not quite do the job. But what would IT life be without challenges?

So in this post I will explain in short how I have overcome this problem. And I'm sure you will be able to use this, or even make it better.

We will look into the following:

* Incoming raw data

* Creating filter

* Enjoying results

 

Incoming raw data:

So you have your VyOS box doing the hard work on the edge of your network. And now you would like to know when someone is knocking on your door, or to find the root cause when troubleshooting firewall rules.

An example of incoming data from my box looks similar to the following:

<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] 
[internet_local-default-D]IN=eth2 OUT= 
MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 
SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 
PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP 
SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0

If we just apply basic syslog filtering, it will not give us all the required fields. The challenge here is that we first need to extract the firewall rule name; after that, the remaining key=value pairs are a perfect use case for the key-value (kv) filter.

 

Creating filter:

So the time has come to use some magical skills and create the configuration for the Logstash filter. I would like to stress that different approaches can have an impact on performance, both negative and positive. As I'm still getting familiar with Logstash, this might not be the best solution, but I will definitely keep exploring.

You will notice that my filters use conditional statements so I do not process data unnecessarily. In my case VyOS traffic is tagged as syslog and contains a specific string in the message.

So without further ado … we begin with parsing out the data from the message that will for sure be extracted.

  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }

Points of interest here :

  • Grok filter DOES NOT break on match
  • We match on message, and then further on the extracted syslog_message, to get our firewall rule from between the [ ] – the result on our sample line is shown below
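
Run against the sample message from the beginning of the post, this grok stage should produce roughly the following fields (my annotation, not verbatim Logstash output):

  "syslog_pri"       => "4"
  "syslog_timestamp" => "Dec  6 01:36:00"
  "syslog_hostname"  => "myfwname"
  "syslog_program"   => "kernel"
  "syslog_pid"       => "465183.670329"   # the kernel timestamp, caught by the second [ ] capture
  "syslog_message"   => "[internet_local-default-D]IN=eth2 OUT= MAC=... SRC=5.6.7.8 ..."
  "firewall_rule"    => "internet_local-default-D"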

Next we make changes to our fields on the fly using mutate:

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "= ", "=xxx" ]            # Here we remove scenario where this value is empty
    }

Points of interest :

  • I add a field event_type with the value firewall, so in future I will be able to quickly query for those events.
  • I rename my previously extracted field ( firewall_rule ) to a nested field
  • And lastly I use gsub to mitigate the problem of a missing value in a key pair

Once this is done, I extract the remaining values using the kv filter, which is configured as follows:

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }

Points of interest :

  • I use include_keys so only the fields in the array will be extracted ( positive impact on performance )
  • I tried field_split to help out with one of the earlier challenges, but it did not make much of a difference
  • And lastly I specify my new nested fields for the extracted values – a sample result is shown below
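
Applied to our sample message, the mutate and kv stages together should leave us with a nested structure roughly like this:

  "event_type" => "firewall"
  "firewall"   => {
      "rule"                => "internet_local-default-D",
      "source_address"      => "5.6.7.8",
      "destination_address" => "1.2.3.4",
      "protocol"            => "TCP",
      "source_port"         => "51790",
      "destination_port"    => "80",
      "interface_in"        => "eth2",
      "mac_address"         => "00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00"
  }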

 

So that's it! The complete file looks as follows:

filter {
  if [type] == "syslog" and [message] =~ "vmfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }
    
    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "OUT= MAC=", "MAC=" ]            # Here we remove scenario where this value is empty
    }

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }
  }
}

 

Enjoying results:

Now we need to test whether this really works as we expect it to. To check, we will of course use Docker.

First I create the aforementioned config file and name it logstash.conf. Once that's done, we bring the container up with the following command:

docker run -d -p 25666:25666 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

This creates a container which I can then test locally. Note that for this to work your config also needs an input source (e.g. tcp / udp) and an output (e.g. stdout with a ruby codec).
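
As a reference, a minimal input/output wrapper around the filter for such a local test could look like this (the tcp input on port 25666 matching the published port is my assumption, not the original setup):

input {
  tcp {
    port => 25666
    type => "syslog"
  }
}

output {
  stdout { codec => rubydebug }
}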

Then I split my screen using tmux and execute requests while looking at the results from docker logs.
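
A quick way to fire such a request is to pipe the sample line into the published port with netcat, e.g.:

echo '<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] [internet_local-default-D]IN=eth2 OUT= MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0' \
  | nc localhost 25666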

[Screenshot: Logstash-vyos-local]

 

And that's it! You have beautifully working parsing for your VyOS box! If you have any comments or improvements – feel free to share!