
Logstash – Filtering Vyos syslog data

Hey, so over the last days/weeks 🙂 I have been working quite a lot with the ELK stack, especially on getting data from my systems into Elasticsearch. There would not be any problem if not for the fact that the default parsing did not quite do the job. But what would IT life be without challenges?

So in this post I will explain in short how I have overcome this problem. And I’m sure you will be able to use this or even make it better.

We will look into the following:

* Incoming raw data

* Creating filter

* Enjoying results

 

Incoming raw data:

So you have your VyOS box doing the hard work on the edge of your network. Now you would like to know when someone is knocking on your door, or to find the root cause when troubleshooting firewall rules.

An example of the incoming data from my box looks similar to the following:

<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] 
[internet_local-default-D]IN=eth2 OUT= 
MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 
SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 
PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP 
SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0

If we just apply basic syslog filtering it will not give us all the required fields. The challenge here is that we first need to extract the firewall rule name, and once we have that, the rest of the message is a perfect use case for the key-value filter.

 

Creating filter:

So the time has come to use some magical skills to create the configuration for the Logstash filter. I would like to stress that different approaches can have an impact on performance, both negative and positive. As I’m still getting familiar with Logstash this might not be the best solution, but I will definitely keep exploring.

You will notice that my filters use conditional statements so I do not process data unnecessarily. In my case VyOS traffic is tagged as syslog and contains a specific string in the message.

So without further ado … we begin by parsing out the data from the message that will definitely be there.

  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }

Points of interest here :

  • Grok filter DOES NOT break on match
  • We match on message and then on the extracted syslog_message to pull our firewall rule from between the [ ]

Next we will make changes to our fields on the fly using mutate:

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "= ", "=xxx" ]            # Fill empty values (e.g. OUT= ) with a dummy so the kv filter does not choke
    }

Points of interest :

  • I add a field called event_type with the value firewall so that in the future I can quickly query for those events.
  • I rename my previous field ( firewall_rule ) to a nested field
  • And lastly I use gsub to mitigate the problem of missing values in key=value pairs

Once this is done I extract the remaining values using the kv filter, which is configured as follows:

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }

Points of interest :

  • I use include_keys so only the fields in the array will be extracted ( positive impact on performance )
  • I tried field_split to help out with one of the previous challenges but it did not make a lot of difference
  • And lastly I specify my new nested fields for extracted values

 

So that's it! The complete filter looks as follows:

filter {
  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }
    
    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "OUT= MAC=", "MAC=" ]            # Drop the empty OUT= key so the kv filter does not choke
    }

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }
  }
}

 

Enjoying results:

We now need to test whether this really works as we expect it to. To check this we will of course use Docker.

First I create the aforementioned config file and name it logstash.conf . Once that's done we bring the container up with the following command:

docker run -d -p 25666:25666 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

This creates a container which I can then test locally. Now for this to work your config also needs an input source ( e.g. tcp / udp ) and an output ( e.g. stdout with the rubydebug codec ).
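As a minimal sketch ( the port and the rubydebug codec are just my assumptions for local testing, not part of the original post ), the filter above can be wrapped like this:

input {
  tcp {
    port => 25666
    type => "syslog"
  }
  udp {
    port => 25666
    type => "syslog"
  }
}

# the filter { ... } block from this post goes here

output {
  stdout { codec => rubydebug }
}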

Then I split my screen using tmux and execute a request while looking at the results from docker logs.
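One simple way to generate such a request locally ( assuming the tcp input on port 25666 from the sketch above ) is to push the sample log line in with netcat:

echo '<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] [internet_local-default-D]IN=eth2 OUT= MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0' | nc localhost 25666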

[ Screenshot: Logstash-vyos-local ]
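If everything clicks, the interesting part of the parsed event in docker logs, based on the sample line from the beginning, should look roughly like this ( abbreviated; field order and formatting will differ ):

"firewall" => {
    "rule" => "internet_local-default-D",
    "source_address" => "5.6.7.8",
    "destination_address" => "1.2.3.4",
    "protocol" => "TCP",
    "source_port" => "51790",
    "destination_port" => "80",
    "interface_in" => "eth2",
    "mac_address" => "00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00"
}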

 

And that's it! You have beautifully working parsing for your VyOS box! If you have any comments or improvements – feel free to share!

 

 


Docker compose and ELK – Automate the automated deployment


This post contains information which has been updated in the post

docker ELK with compose v2

However, to get an idea of how the solution works I recommend reading through anyway 🙂


Hello!

It has been a bit quiet here for a long time, however there was a lot keeping me busy. And as you know, in the majority of scenarios time is what we are short on 🙂 Today I wanted to share with you an update to one of my previous posts where we set up ELK in an automated way.

When I originally finished the post it of course “was working like a charm!” and then I just left it for a while and focused on a couple of other projects. Recently I visited that page again as I wanted to quickly deploy the ELK stack for my testing … and then surprise – it does not work?! Of course the IT world is like a super speed train 🙂 and it seems like I just stopped at a station and forgot to jump back on 🙂

So from my perspective it was a great opportunity to craft some extra bash skills and refresh my knowledge of Elasticsearch, Logstash and Kibana.

So what’s changed ?

First of all there is now one major script which gets the job done. The only thing you need to do is to specify a cluster name for Elasticsearch.

I have also added some folder existence checking so it does not come back with dummy error messages that folders already exist.
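The check itself is nothing fancy – a rough sketch of the idea, using the folder layout you will see in the output below, looks like this:

# create component folders only when they are missing
for dir in ~/compose/elk_stack ~/logstash/central ~/logstash/agent ~/elasticsearch/config; do
  if [ ! -d "$dir" ]; then
    mkdir -p "$dir"
  fi
done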

How to run it now ?

Start by downloading the script locally to the folder under which we will create the remaining folders for our components:

curl -L http://git.io/vBPqC >> build_elk.sh

The -L option is there for the purpose of following redirects (as that's what git.io is doing for us).

 

Once done you might need to make it executable:

sudo chmod +x build_elk.sh

 

And that's all 🙂 the last thing to do is to execute the script with the first argument being the desired name of your Elasticsearch cluster. Output is almost instant and promising 🙂

bar@host:~$ sudo ./build_elk.sh myelastico
Cloning into '/home/bar/compose/elk_stack'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 7), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (26/26), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/central'...
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 17 (delta 4), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (17/17), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/agent'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Cloning into '/home/bar/elasticsearch/config'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Creating redis-cache
Creating elasticsearch-central
Creating logstash-central
Creating kibana-frontend
Creating logstash-agent

 

Let's check the Docker daemon to see if our containers are indeed running …

bar@host:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ff4f41753d6e        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25827->25827/udp, 0.0.0.0:25827->25827/tcp   logstash-agent
be97a16cdb1e        kibana:latest       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:5601->5601/tcp                               kibana-frontend
d3535a6d9df8        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25826->25826/tcp, 0.0.0.0:25826->25826/udp   logstash-central
38b47ffbb3e7        elasticsearch:2     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp       elasticsearch-central
100227df0b50        redis:latest        "/entrypoint.sh redis"   2 minutes ago       Up 2 minutes        0.0.0.0:6379->6379/tcp                               redis-cache
bar@host:~$

 

They all are 🙂 that took less than a second (although I already had the images on my host …) and what if we just check the browser?

[ Screenshot: kibana_2 ]

 

 

And if anything changes? Well, this is all in git … 🙂 so just pull for changes and you will definitely get the most up to date version. But maybe you have some suggestions or improvements? Then just push them – I’m sure it would be beneficial 🙂

 

Below is the view of the gist 🙂


Configure WSMan connectivity on vyos firewall

This is just a quick write-up. When working with secure environments it might be necessary to open some firewall ports. If by any chance you are looking at how to do that on a VyOS firewall, below you will find the details.

Enter configuration mode :

# Enter configuration mode
configure

Create the new rule :

# Port Group 
set firewall group port-group WSMan port '5985-5986'

# Set the rule
set firewall name some-name rule 666 action 'accept'
set firewall name some-name rule 666 description 'Allow for PowerShell remoting'
set firewall name some-name rule 666 destination group network-group AllNetworks
set firewall name some-name rule 666 destination group port-group 'WSMan'
set firewall name some-name rule 666 protocol 'tcp'
set firewall name some-name rule 666 source group address-group 'my-management-servers'

 

Now the above might require a short explanation :

  • First we create a port group called WSMan
  • Then we create rule 666 which will allow PowerShell remoting
  • It will be allowed to the network group defined in AllNetworks ( defining it is beyond the scope of this short post so you can always find it in the documentation http://vyos.net/wiki/User_Guide – there is also a quick sketch right after this list )
  • We specify that we allow the port group defined earlier (so in our case the WSMan ports )
  • Its protocol is TCP
  • and lastly we say that the source of this will be my management servers, defined as an address group with the name my-management-servers ( again 🙂 I will refer you to the wiki on how to create those )
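For completeness, a rough sketch of how those two groups could be defined ( the networks and the address below are placeholders – adjust them to your environment ):

# Network group referenced by the rule
set firewall group network-group AllNetworks network '10.0.0.0/8'
set firewall group network-group AllNetworks network '192.168.0.0/16'

# Address group with the management servers
set firewall group address-group my-management-servers address '192.168.1.10'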

Once you are done with those you need to make sure the configuration is committed. This is done by calling the following :

commit

 

If there are no validation errors just save the config 🙂 so you are not surprised that it does not work after a reboot 🙂

save

 

Enjoy securing your networks 🙂


PowerShell – Azure Resource Manager policies

Microsoft does not stop listening to people. Many IT professionals are heavily using Azure Resource Manager, and the natural course of action is to require better control over what can and cannot be done.

Simple as it may sound, Microsoft has now offered ARM policies. You may find details from the 23:22 mark in the video below.

 

On the plus side, Microsoft has already prepared documentation for us, which is waiting here.

Is it difficult? I personally think it is not – although there is no GUI, but which engineer these days uses a GUI 🙂 you have the option to use either the REST API or PowerShell cmdlets (communicating over that API 🙂 ).
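Just to sketch the PowerShell route ( the cmdlet names come from the AzureRM.Resources module; the policy file path, names and subscription scope below are purely illustrative ):

# Define a policy from a local JSON file and assign it at a subscription scope
$definition = New-AzureRmPolicyDefinition -Name 'deny-non-eu-regions' `
    -Description 'Deny resources outside of EU regions' `
    -Policy 'C:\policies\deny-non-eu-regions.json'

New-AzureRmPolicyAssignment -Name 'deny-non-eu-regions-assignment' `
    -PolicyDefinition $definition `
    -Scope '/subscriptions/11111111-2222-3333-4444-555555555555'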

What do policies give me control over? They are built around the following principle:

{
  "if" : {
    <condition> | <logical operator>
  },
  "then" : {
    "effect" : "deny | audit"
  }
}

As you can see, we define conditions and logical operators and, based on that, take an action such as deny or audit.
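For instance, a tiny illustrative policy ( my example, not taken from the documentation ) that denies resources created outside of the EU regions could look along these lines:

{
  "if": {
    "not": {
      "field": "location",
      "in": [ "northeurope", "westeurope" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}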

 

At the moment I’m not dropping any more examples – the documentation already has a couple of them – so you might want to try those out as you read the details.

 

Happy automating 🙂


C# – Generate Entity Framework SQL script

This one is going to be a really short one. As it happens that I need to enable others to recreate DBs for the API models I create, I usually deliver them a SQL script that does the work. So how do you generate one in VS?

Well, using the Package Manager Console you just call:

Update-Database -Script -SourceMigration:0

 

And that creates the SQL script for you – of course without any seed data 🙂
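If you only need the changes between two specific migrations, the same cmdlet can scope the script ( the migration names here are made up for illustration ):

Update-Database -Script -SourceMigration:InitialCreate -TargetMigration:AddCustomerTable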


PowerShell – Autodocument your modules/scripts using markdown

When writing your scripts or modules, have you not wished that it would all document itself? Isn't this what we should be aiming for when creating automations? 🙂 So that automations would automate documenting themselves?

This is exactly what automation should be about, and today I’m going to show you how I create automated documentation for extremely big modules in seconds. As mentioned before, we will be using Markdown, so it would be great if you would jump here and get some more info if this is something new to you.

 

Prerequisites

In order for this to work you must have the good habit of documenting your functions. This is the key to success. An example of such function documentation using the comment-based help approach can look as follows:

function invoke-SomeMagic
{
      <#
        .SYNOPSIS
        Creates magical events

        .PARAMETER NumberOfPeople
        This parameter defines how many people are looking at your screen at the time of invoking the cmdlet

        .PARAMETER DifficultyImpression
        This parameter defines how difficult it looks what you are currently doing
        .DESCRIPTION
        This function executes magical events all around you. The parameters give you direct control over how difficult it seems, and how many people are watching has a direct influence on the range of events.


        .EXAMPLE 
        invoke-SomeMagic -NumberOfPeople 1 -DifficultyImpression 10

        Creates really difficult looking magic for one person

        .EXAMPLE 
        invoke-SomeMagic -NumberOfPeople 100 -DifficultyImpression 10

        Creates a magical show
    #>


# Function doing something here :) ...........


}

 

Auto documenting script

Now what would an automation be without automating it 😀 ? Below is my implementation of autodocumenting to Markdown.
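A rough illustrative sketch of that idea – not the original script – could look like this ( the module name and output path are placeholders, and it assumes the module is already imported and uses comment-based help ):

$moduleName = 'MyDemoModule'                 # placeholder module name
$outputFile = ".\$moduleName.md"

$md = New-Object System.Text.StringBuilder
[void]$md.AppendLine("# Module: $moduleName")

foreach ($cmd in Get-Command -Module $moduleName -CommandType Function) {
    $help = Get-Help $cmd.Name -Full
    [void]$md.AppendLine("## $($cmd.Name)")
    [void]$md.AppendLine("**Synopsis:** $($help.Synopsis)")
    foreach ($p in $help.parameters.parameter) {
        [void]$md.AppendLine("* **$($p.name)** - $($p.description.Text)")
    }
    foreach ($ex in $help.examples.example) {
        [void]$md.AppendLine("    $($ex.code)")
    }
}

# write via a temporary file with explicit encoding - the online PDF converter chokes on the default
$tmp = [System.IO.Path]::GetTempFileName()
$md.ToString() | Out-File -FilePath $tmp -Encoding utf8
Move-Item -Path $tmp -Destination $outputFile -Force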

 

What I really like here is the fact that it generates a temporary file during documentation ( I discovered that encoding gives problems with the online PDF converter ). The whole thing can be changed to suit your needs and layout requirements.

 

Convert it to PDF

The last stage would be converting it to PDF. At the moment I’m using http://www.markdowntopdf.com/ to convert the file prepared by the above script. And I must say that the results are extremely satisfying.

 

Example

I have prepared a small demo of how it works in action. For this purpose I created a demo module with 3 dummy functions and then ran the script. Below is a snippet of how it looks. As mentioned before – I really like it, and that kind of file can be nicely sent to another engineer to quickly get them familiar with your module.

 

[ Screenshot: markdown_autodocumentation ]


Powershell – Network cmdlets

In an effort to move away from old school habits of using e.g. nslookup instead of PS cmdlets, I thought it would be beneficial to reblog, for reference, a quite interesting article about replacing those commands with pure PowerShell cmdlets. You can find the original article here.

IPCONFIG

Used to get IP configuration.

Get-NetIPConfiguration
Get-NetIPAddress

Get-NetIPConfiguration
Get-NetIPAddress | Sort InterfaceIndex | FT InterfaceIndex, InterfaceAlias, AddressFamily, IPAddress, PrefixLength -Autosize
Get-NetIPAddress | ? AddressFamily -eq IPv4 | FT –AutoSize
Get-NetAdapter Wi-Fi | Get-NetIPAddress | FT -AutoSize

 

PING

Check connectivity to target host.

Test-NetConnection

Test-NetConnection www.microsoft.com
Test-NetConnection -ComputerName www.microsoft.com -InformationLevel Detailed
Test-NetConnection -ComputerName www.microsoft.com | Select -ExpandProperty PingReplyDetails | FT Address, Status, RoundTripTime
1..10 | % { Test-NetConnection -ComputerName www.microsoft.com -RemotePort 80 } | FT -AutoSize

 

NSLOOKUP

Translate IP to name or vice versa

Resolve-DnsName

Resolve-DnsName www.microsoft.com
Resolve-DnsName microsoft.com -type SOA
Resolve-DnsName microsoft.com -Server 8.8.8.8 –Type A

 

ROUTE

Shows the IP routes (also can be used to add/remove )

Get-NetRoute
New-NetRoute
Remove-NetRoute

Get-NetRoute -Protocol Local -DestinationPrefix 192.168*
Get-NetAdapter Wi-Fi | Get-NetRoute

 

TRACERT

Trace route. Shows the IP route to a host, including all the hops between your computer and that host.

Test-NetConnection –TraceRoute

Test-NetConnection www.microsoft.com –TraceRoute
Test-NetConnection outlook.com -TraceRoute | Select -ExpandProperty TraceRoute | % { Resolve-DnsName $_ -type PTR -ErrorAction SilentlyContinue }

 

 

NETSTAT

Shows current TCP/IP network connections.

Get-NetTCPConnection

Get-NetTCPConnection | Group State, RemotePort | Sort Count | FT Count, Name –Autosize
Get-NetTCPConnection | ? State -eq Established | FT –Autosize
Get-NetTCPConnection | ? State -eq Established | ? RemoteAddress -notlike 127* | % { $_; Resolve-DnsName $_.RemoteAddress -type PTR -ErrorAction SilentlyContinue }

 

 

So happy moving into the object-oriented world of PowerShell 🙂

 


PowerShell – Active Directory changes synchronization with cookie

In today’s post I wanted to show you something that can be of interest to those who need to find recent Active Directory changes but are challenged by e.g. a big AD forest with a large number of objects, and are hitting performance problems when executing queries. So where does this problem come from? Well, if you have an Active Directory with a lot ( really a lot ) of objects, then querying for changes quite often can be troublesome.

But don't worry – there are a couple of ways to tackle this challenge. If you look for more details you will find that you can just query for the information (duh?!) / subscribe to be notified when changes occur (push) / or make incremental queries (pull). And today we will investigate exactly that last one: querying using a synchronization cookie.

The principle here is to use a cookie which allows us to poll only for changes since the last time we queried AD. This way we can have a very specific query and return only the subset of properties we are really interested in.

 

The whole code is quite simple to implement and consists of the following:
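A rough illustrative sketch of that approach – not the original snippet – could look like this ( the LDAP path, filter and cookie file location are placeholders; it relies on System.DirectoryServices and its DirSync control ):

# Search root - replace with your own domain naming context
$root = [ADSI]"LDAP://DC=contoso,DC=com"

$searcher = New-Object System.DirectoryServices.DirectorySearcher($root)
$searcher.Filter = "(objectClass=user)"
[void]$searcher.PropertiesToLoad.Add("displayName")
[void]$searcher.PropertiesToLoad.Add("whenChanged")

# Reuse the cookie from the previous poll if we have one, otherwise start a full sync
$cookieFile = "C:\temp\ad_sync_cookie.bin"
if (Test-Path $cookieFile) {
    $cookie = [System.IO.File]::ReadAllBytes($cookieFile)
    $searcher.DirectorySynchronization = New-Object System.DirectoryServices.DirectorySynchronization -ArgumentList @(,$cookie)
} else {
    $searcher.DirectorySynchronization = New-Object System.DirectoryServices.DirectorySynchronization
}

# Only objects changed since the last poll come back
$searcher.FindAll() | ForEach-Object { $_.Path }

# Persist the cookie so the next run is incremental
[System.IO.File]::WriteAllBytes($cookieFile, $searcher.DirectorySynchronization.GetDirectorySynchronizationCookie())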

And that would be all for this one. From the code above you can see that your subsequent requests will be based only on the changes since the last poll (of course based on the query you provided). In one of the next posts we will focus on doing this in C#, as some of you may want to do more DevOps.

 

 

 

 

 


OpenSSL – generate self signed certificate

Quite often, to test different aspects of IT or security, we use certificates which are self-signed. As the name implies, we are responsible for generating them. In this post we will go through a short explanation of how to generate one with the use of openssl.

To create one we will issue the following command:

openssl req -x509 -newkey rsa:2048 -keyout certificate-key.pem -out certificate.pem -days 365

In order to better understand the above command let's break it down:

  • req : PKCS#10 certificate request and certificate generating utility
  • -x509 : we receive a self-signed certificate as output instead of a certificate request
  • -newkey rsa:#### : creates a new certificate request and a new private key. In this instance we use RSA with a size of #### bits
  • -keyout file.name : outputs the newly created private key into a file
  • -out file.name : specifies the output file name
  • -days # : specifies how many days the certificate will be valid when the x509 option has been used. The default value for this setting is 30 days
  • -nodes : indicates that the private key should not be encrypted
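Once generated, you can quickly double-check what you got by dumping the certificate details:

openssl x509 -in certificate.pem -text -noout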

 

For those of you on Windows, we sometimes need to get a PFX (which contains the private and public key). The easiest way is to use OpenSSL in the following form:

openssl pkcs12 -inkey bob_key.pem -in bob_cert.cert -export -out bob_pfx.pfx

 

Since some of you will be working on Windows, you might come across the following error:

WARNING: can't open config file: /usr/local/ssl/openssl.cnf

What you are missing then is a setting for an environment variable (*make sure to adjust the path to your cfg file ):

set OPENSSL_CONF=c:\OpenSSL-Win32\bin\openssl.cfg

 

And that's it for the self-signed certificate. In the next post we will use this knowledge of certificates with the power of Docker and set up our own registry.

 


SSL file standards explained

While browsing the net I came across an interesting post on Server Fault and I thought it would be nice to have it as a point of reference, especially when working with certificates.

Below you may find the most popular standards:

  • .csr This is a Certificate Signing Request. Some applications can generate these for submission to certificate-authorities. The actual format is PKCS10 which is defined in RFC 2986. It includes some/all of the key details of the requested certificate such as subject, organization, state, whatnot, as well as the public key of the certificate to get signed. These get signed by the CA and a certificate is returned. The returned certificate is the public certificate (not the key), which itself can be in a couple of formats.
  • .pem Defined in RFC’s 1421 through 1424, this is a container format that may include just the public certificate (such as with Apache installs, and CA certificate files /etc/ssl/certs), or may include an entire certificate chain including public key, private key, and root certificates. The name is from Privacy Enhanced Mail (PEM), a failed method for secure email but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys.
  • .key This is a PEM formatted file containing just the private-key of a specific certificate and is merely a conventional name and not a standardized one. In Apache installs, this frequently resides in /etc/ssl/private. The rights on these files are very important, and some programs will refuse to load these certificates if they are set wrong.
  • .pkcs12 .pfx .p12 Originally defined by RSA in the Public-Key Cryptography Standards, the “12” variant was enhanced by Microsoft. This is a passworded container format that contains both public and private certificate pairs. Unlike .pem files, this container is fully encrypted. Openssl can turn this into a .pem file with both public and private keys: openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes

A few other formats that show up from time to time:

  • .der A way to encode ASN.1 syntax in binary, a .pem file is just a Base64 encoded .der file. OpenSSL can convert these to .pem (openssl x509 -inform der -in to-convert.der -out converted.pem). Windows sees these as Certificate files. By default, Windows will export certificates as .DER formatted files with a different extension. Like…
  • .cert .cer .crt A .pem (or rarely .der) formatted file with a different extension, one that is recognized by Windows Explorer as a certificate, which .pem is not.
  • .p7b Defined in RFC 2315, this is a format used by windows for certificate interchange. Java understands these natively. Unlike .pem style certificates, this format has a defined way to include certification-path certificates.
  • .crl A certificate revocation list. Certificate Authorities produce these as a way to de-authorize certificates before expiration. You can sometimes download them from CA websites.

In summary, there are four different ways to present certificates and their components:

  • PEM Governed by RFCs, it’s used preferentially by open-source software. It can have a variety of extensions (.pem, .key, .cer, .cert, more)
  • PKCS7 An open standard used by Java and supported by Windows. Does not contain private key material.
  • PKCS12 A private standard that provides enhanced security versus the plain-text PEM format. This can contain private key material. It’s used preferentially by Windows systems, and can be freely converted to PEM format through use of openssl.
  • DER The parent format of PEM. It’s useful to think of it as a binary version of the base64-encoded PEM file. Not routinely used by much outside of Windows.