
IP address management with phpIPAM and Docker

Recently I came across the need for an IP address management tool. I looked at several options and decided that the best one was phpIPAM, for two main reasons:

  • API
  • Overlapping subnets

The look and feel also gives me a positive impression of the product, as does the amount of active development happening on GitHub.

[Screenshot: phpIPAM main console]

Using Docker to run the application

So I decided to prepare a fully operational Docker solution to support this application. Having learned from previous mistakes, the GitHub repository I will be referring you to has been tagged accordingly, so if any changes occur you will always be able to follow the directions in this post.

 

Repository: RafPe/docker-phpipam – phpIPAM IP address management in a Docker container (on GitHub)

 

I would like to avoid duplicating information, so I will just highlight one of the possible installation options; the rest is covered on Docker Hub and on GitHub.

 

We start off by cloning the repository:

git clone https://github.com/RafPe/docker-phpipam.git

 

Once that's done we can check out the specific tag (the tag associated with the content of this post):

git checkout tags/v1.0.1

 

and then we have all the components needed to run the last command:

docker-compose up -d

which in turn gives the following output:

[Screenshot: docker-compose output showing the phpIPAM containers running]

 

And off you go with testing. A couple of points are worth mentioning here:

  • For a production run, use a database backend with persistent storage – in this form the DB container has no persistent storage (see the sketch after this list)
  • Consider using SSL
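As a minimal sketch of what persistent database storage could look like, assuming the stack's compose file defines a MariaDB service (the service name and host path below are illustrative, not the exact contents of the repository's compose file):

phpipam-db:
  image: mariadb:latest
  volumes:
    - /vol/appdata/phpipam-db:/var/lib/mysql   # a host directory keeps the data across container restarts
  environment:
    - MYSQL_ROOT_PASSWORD=secret-pw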

 

The application has a lot of pros and in my opinion is really worth looking into if your management tooling needs some automation!

 


HAProxy – logging within a Docker container

Hey,

So today we will continue looking at HAProxy – however, this time we will use Docker to host our load balancer. While it is no problem to simply download the official image from Docker Hub and run it instantly, it does not give me out of the box the one thing I was after … the logs.

That's why I went ahead and created my own version of HAProxy which includes rsyslog. The repository with the image can be found on GitHub.

In order to run the container we just need to execute the following commands:

  1. To get the most up-to-date image from my Docker Hub repo:
    docker pull rafpe/docker-haproxy-rsyslog
  2. To start the container (assuming you have the config file in the current directory):
    docker run -it -d -P -v ${PWD}/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg rafpe/docker-haproxy-rsyslog

 

Once you do this, the container should be up and running, and if you query the current Docker containers you should see something similar to the output below:

[Screenshot: docker ps and docker logs output for the HAProxy container]

 

As you can see, the logs are directly visible when querying with the docker logs command.
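For example (the container ID or name is whatever docker ps reports on your host):

docker ps
docker logs -f <container_id_or_name>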

In one of the future posts we will investigate log format customisations as well as a feature included in HAProxy since 1.6: log tags.

 

If you have any problems configuring this because of a missing config, you can use the sample below:

global
    log 127.0.0.1 local2
    maxconn 2000
    pidfile /var/run/haproxy.pid

    tune.ssl.default-dh-param 2048

    # SSL ciphers
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA



defaults
    mode    http
    option  httplog
    option  dontlognull
    option  forwardfor
    option  contstats
    option  http-server-close
    option log-health-checks
    retries 3
    option  redispatch
    timeout connect  5000
    timeout client  10000
    timeout server  10000

    # make sure log-format is on a single line
    log global
    log-format {"type":"haproxy","timestamp":%Ts,"http_status":%ST,"http_request":"%r","remote_addr":"%ci","bytes_read":%B,"upstream_addr":"%si","backend_name":"%b","retries":%rc,"bytes_uploaded":%U,"upstream_response_time":"%Tr","upstream_connect_time":"%Tc","session_duration":"%Tt","termination_state":"%ts"}


frontend http-in
    bind *:80
    
    # Default backend to be used
    default_backend will-be-back-soon


backend will-be-back-soon
   balance roundrobin

 


Docker compose v2 – using static network addresses

Docker Compose is a really great piece of code 🙂 that allows you to build better orchestration with your containers. Recent breaking releases introduced a lot of features. While looking at some of them, I was wondering about situations in which you build a more (or a bit less) complex container-based environment and do not have service discovery. In some instances you would just like to have static IP addresses.

Now, this is perfectly easy to do when running containers with the CLI … but how do you do it with Compose? After looking at the documentation I managed to come up with the following.

This allows me to specify static IP addresses for my containers using the compose file. For reference you can find the full file below:

version: '2'

services:
  haproxy:
       image: haproxy:latest
       ports:
          - "80:80"
          - "443:443"
       volumes:
          - ${PWD}/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
       restart: always
       networks:
          - widgets
       logging:
        driver: json-file
        options:
          max-size: "100m"
          max-file: "3"
          labels: "haproxy"

  mariadb:
       image: mariadb:latest
       volumes:
          - /vol/appdata/mariadb:/var/lib/mysql
       environment:
          - MYSQL_ROOT_PASSWORD=secret-pw
       restart: always
       networks:
          - widgets
       logging:
         driver: json-file
         options:
           max-size: "100m"
           max-file: "3"
           labels: "mariadb"

  app_orangella:
       image: apache:1.0
       restart: always
       ports:
          - "81:80"
       networks:
          - widgets
       logging:
         driver: json-file
         options:
           max-size: "50m"
           max-file: "3"
           labels: "app_orangella"

networks:
  widgets:
    driver: bridge
    ipam:
     config:
       - subnet: 172.10.0.0/16
         gateway: 172.10.5.254
         aux_addresses:
          haproxy: 172.10.1.2
          mariadb: 172.10.1.3
          app_orangella: 172.10.1.4
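Worth knowing: aux_addresses is documented as reserving addresses for hosts outside of Compose, so if the goal is to pin the address of a container itself, the v2 file format also allows ipv4_address under the service's networks key. A minimal sketch reusing the names from the file above (an illustration, not a drop-in replacement):

services:
  haproxy:
    image: haproxy:latest
    networks:
      widgets:
        ipv4_address: 172.10.1.2     # fixed address for this container inside the widgets subnet

networks:
  widgets:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16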

 

Hope this gets you rolling with Docker Compose 🙂

 


Docker compose: error while loading shared libraries libz.so.1

I recently got a very annoying error on a freshly installed CentOS 7 machine when trying to use the most up-to-date docker-compose (1.6.2 at the moment of writing).

The error stated the following when trying to execute a compose file:

docker-compose up -d
docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted

 

So temporarily I decided to disable SELinux, however that did not help, and the logs were not helpful either in this instance. After a bit of wandering around the internet I came across this GitHub issue and tried one of the workarounds, which worked in my case.

The solution was to remount /tmp with exec permission by executing:

sudo mount /tmp -o remount,exec
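Note that a remount like this does not survive a reboot. If you want it to be permanent, one option (an assumption on my side – adapt it to however /tmp is mounted on your box) is to make sure the /tmp entry in /etc/fstab carries the exec option, for example:

# /etc/fstab – example entry only; merge the exec option into whatever entry you already have
tmpfs   /tmp   tmpfs   defaults,exec   0 0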

 


Docker – ELK with compose v2


This post contains information based on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it's worth checking that out 🙂



If you are working with Docker then you are certainly in for non-stop challenging and interesting times. And since Docker is so actively developed, you cannot just build a solution and 'forget about it' – you would miss out on so much innovation.

So since I previously created my ELK stack with Docker Compose, I decided it was finally a good time to move it to the compose v2 format!

 

 

If you have not heard about the breaking changes, there is quite a nice post on the Docker blog where you can get all the info to get you going. To save you looking all over the internet, here is the link.

So once you get an idea of how cool things can now be done, we can get going. We will start off by getting the files from the GitHub repository. This time it differs a bit from the previous posts – back then you could end up with a version of the repo which was not stable or just refused to work for whatever reason. I have now used tags, which lets you get to a specific version of the code – in a nutshell, it will work 😀

So let's get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
git checkout tags/v2.0.0

Once you have this you can just start it up by typing:

docker-compose up -d

This will start creating the containers, which gives the following output:

[Screenshot: docker-compose up -d output creating the ELK containers]

 

Let's see if all containers are running correctly by checking the logs:

docker-compose logs

You will probably get output similar to the following:

[Screenshot: docker-compose logs output]

 

And that's basically how you go about creating the stack with the default setup – but if you would like to tweak some settings, check out the following:

Logging:

I limited the logging driver's file size and roll-over by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"

 

Elasticsearch data persistence:

As with most of my development tasks I do not use persistent data; if you would like to have it for the Elasticsearch cluster, you will have to uncomment the following line in the compose file, specifying where to store the data:

volumes:
  # - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data

 

Logstash configuration:

By default Logstash will use demo-logstash.conf, which is configured with just a beats input and some filtering applied. Once processed, the data is sent to Elasticsearch. There are more ready-made Logstash config files under the ./logstash folder, so feel free to explore and possibly use them.
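To give an idea of the shape, a pipeline like that generally looks something like the following sketch – not the exact contents of demo-logstash.conf; the port and hostname are assumptions, with the Elasticsearch host normally matching the compose service name:

input {
  beats {
    port => 5044                       # typical beats port – check the repo's config for the real value
  }
}

filter {
  # filtering of your choice goes here
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]    # assumed service name from the compose file
  }
}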

 

 

If you have any comments, leave them below – I'm interested in your approach as well 😀

 


Docker ownCloud container with LDAP support

Since we already talked about using Azure Files for storage with Docker, why not put that storage to your own use – something like your own 'Dropbox' 🙂

The product I'm referring to here is called ownCloud, and I won't spend time telling you its pros and cons. In my case I wanted to use it to share files with friends and family, so I decided to set up my own container.

I chose Docker, and there was a surprise 🙂 – when using the default image there was no LDAP support. So I just added it to a Dockerfile and created a gist of it.
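The gist itself is embedded in the original post; as a rough sketch, enabling LDAP on top of the official image boils down to installing and enabling the PHP LDAP extension (package and library paths below are assumptions for a Debian-based PHP image, not the exact gist contents):

FROM owncloud:latest

# Install the LDAP client library and enable PHP's ldap extension so the ownCloud LDAP app can be used
RUN apt-get update \
 && apt-get install -y --no-install-recommends libldap2-dev \
 && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
 && docker-php-ext-install ldap \
 && rm -rf /var/lib/apt/lists/*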

 

 


 

 

 

 


Logstash – Filtering Vyos syslog data

Hey, so in the last days/weeks 🙂 I have been working quite a lot with the ELK stack, especially on getting data from my systems into Elasticsearch. There would not be any problem if not for the fact that the default parsing did not quite do the job. But what would IT life be without challenges?

So in this post I will explain in short how I have overcome this problem. And I'm sure you will be able to use this or even make it better.

We will look into the following:

  • Incoming raw data
  • Creating the filter
  • Enjoying the results

 

Incoming raw data:

So you have your VyOS box doing the hard work on the edge of your network. And now you would like to have oversight when someone is knocking at your door, or to find the root cause when troubleshooting firewall rules.

An example of incoming data from my box looks similar to the following:

<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] 
[internet_local-default-D]IN=eth2 OUT= 
MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 
SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 
PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP 
SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0

If we just apply basic syslog filtering it will not give us all the required fields. The challenge here is that we need to extract the firewall rule name; after that, we can see the use case for a key-value filter.

 

Creating the filter:

So the time has come to use some magical skills to create the Logstash filter configuration. I would like to stress that different approaches can have an impact on performance, both negative and positive. As I'm still getting familiar with Logstash, this might not be the best solution, but I will definitely keep exploring.

You will notice that my filters use conditional statements so I do not process data unnecessarily. In my case the VyOS traffic is tagged as syslog and contains a specific string in the message.

So without further rambling … we begin by parsing out the data from the message that will get extracted for sure.

  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }

Points of interest here:

  • The grok filter DOES NOT break on match
  • We match on message, and further on the extracted syslog_message, to get our firewall rule from between the [ ]

Next we make changes on the fly to our fields using mutate:

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "= ", "=xxx" ]            # Here we remove scenario where this value is empty
    }

Points of interest:

  • I add a field event_type with the value firewall so in the future I can quickly query for those events.
  • I rename my previous field (firewall_rule) to a nested field.
  • And lastly I use gsub to mitigate the problem of missing values in the key-value pairs.

Once this is done I extract the remaining values using the kv filter, which is configured as follows:

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }

Points of interest:

  • I use include_keys so only the fields in the array will be extracted (positive impact on performance).
  • I tried field_split to help out with one of the earlier challenges, but it did not make a lot of difference.
  • And lastly I specify my new nested fields for the extracted values.

 

So that's it! The complete file looks like the following:

filter {
  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }
    
    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "OUT= MAC=", "MAC=" ]            # Here we remove scenario where this value is empty
    }

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }
  }
}

 

Enjoying the results:

We now need to test whether this really works as we expect. For this check we will, of course, use Docker.

First I create the aforementioned config file and name it logstash.conf. Once that's done we bring the container up with the following command:

docker run -d -p 25666:25666 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

This creates a container which I can then test locally. Now, for this to work you need an input source (e.g. tcp/udp) and an output to stdout (e.g. the rubydebug codec).
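A minimal wrapper of that kind could look as follows – the port matches the one published in the docker run command above, and the type is set so the filter's conditional fires; adjust to your own setup:

input {
  tcp {
    port => 25666
    type => "syslog"        # matches the [type] == "syslog" condition in the filter
  }
}

output {
  stdout {
    codec => rubydebug      # print parsed events so they show up in docker logs
  }
}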

Then I split my screen using tmux and execute requests while looking at the results from docker logs:

[Screenshot: Logstash output in docker logs while sending test data]

 

And that's it! You have beautifully working parsing for your VyOS box! If you have any comments or improvements – feel free to share!

 

 


Docker compose and ELK – Automate the automated deployment


This post contains information which has been updated in the post:

docker ELK with compose v2

However, to get an idea of how the solution works I recommend just reading through 🙂


Hello!

It's been a long time of relative quiet here, however there was a lot keeping me busy. And as you know, in the majority of scenarios time is what we are short of 🙂 Today I wanted to share an update to one of my previous posts where we set up ELK in an automated way.

When I originally finished that post it of course 'was working like a charm!', and then I just left it for a while and focused on a couple of other projects. Recently I visited that page again as I wanted to quickly deploy an ELK stack for my testing… and then surprise – it does not work?! Of course the IT world is like a super-speed train 🙂 and it seems I just stopped at a station and forgot to jump back on 🙂

So from my perspective it was a great opportunity to sharpen some extra bash skills and refresh my knowledge of Elasticsearch, Logstash and Kibana.

So what's changed?

First of all, there is now one major script which gets the job done. The only thing you need to do is to specify a cluster name for Elasticsearch.

I have also added some folder-existence checking so it does not come up with dummy error messages that the folders already exist.
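The idea is roughly this kind of guard (a sketch, not the exact script code – the paths are the ones visible in the output further down):

# create a folder only when it is not already there, so re-runs stay quiet
for dir in /home/bar/compose/elk_stack /home/bar/logstash/central /home/bar/logstash/agent /home/bar/elasticsearch/config; do
  [ -d "$dir" ] || mkdir -p "$dir"
done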

How to run it now?

Start by downloading the script locally to the folder under which we will create the remaining folders for our components:

curl -L http://git.io/vBPqC >> build_elk.sh

The -L option is there to follow redirects (as that's what git.io does for us).

 

Once done you might need to make it executable:

sudo chmod +x build_elk.sh

 

And that's all 🙂 The last thing to do is to execute the script with the first argument being the desired name of your Elasticsearch cluster. Output is almost instant and promising 🙂

$ sudo ./build_elk.sh myelastico
Cloning into '/home/bar/compose/elk_stack'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 7), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (26/26), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/central'...
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 17 (delta 4), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (17/17), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/agent'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Cloning into '/home/bar/elasticsearch/config'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Creating redis-cache
Creating elasticsearch-central
Creating logstash-central
Creating kibana-frontend
Creating logstash-agent

 

Let's check with the Docker daemon whether our containers are indeed running…

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ff4f41753d6e        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25827->25827/udp, 0.0.0.0:25827->25827/tcp   logstash-agent
be97a16cdb1e        kibana:latest       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:5601->5601/tcp                               kibana-frontend
d3535a6d9df8        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25826->25826/tcp, 0.0.0.0:25826->25826/udp   logstash-central
38b47ffbb3e7        elasticsearch:2     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp       elasticsearch-central
100227df0b50        redis:latest        "/entrypoint.sh redis"   2 minutes ago       Up 2 minutes        0.0.0.0:6379->6379/tcp                               redis-cache
$

 

They all are 🙂 That took less than a second (although I already had the images on my host…) and if we just check the browser?

[Screenshot: Kibana frontend in the browser]

 

 

And if anything changes? Well, it's all in git… 🙂 so just pull the changes and you will definitely get the most up-to-date version. But maybe you have some suggestions or improvements? Then just push them – I'm sure it would be beneficial 🙂

 

The gist itself is embedded in the original post 🙂


Azure Files on Ubuntu

If you have not seen the recent post on the Azure blog, I would like to let you know that Azure Files is now GA. Details are available in this blog entry.

Since I would not like to duplicate content, I'm going to show you how you can get an Azure Files share mapped on your Linux boxes. Why Linux boxes? I already have a trillion ideas for using this – the major one is Docker and containers which I would like to make highly available, or my own Docker repository.

 

Creating a file share via the portal is extremely easy and intuitive:

[Screenshot: creating an Azure Files share in the portal]

 

Install tools

We need to install the following package if it is not already present (I have become a fan of Ubuntu 🙂):

sudo apt-get install cifs-utils

 

Mount fileshare

The next step is mounting the share. This has some limitations based on the SMB protocol version being used (for more detailed info look into the Azure blog post linked above). I will be using SMB v3 in this instance, so we are good to go for using Azure Files from on-premises as well.

sudo mount -t cifs //rafpeninja.file.core.windows.net/docker-demo-data ./dockerdemodata -o vers=3.0,username=rafpeninja,password=YourAwesomeStorageKey==,dir_mode=0777,file_mode=0777

 

As I did not want to play with any restrictions yet, the permissions are kind of high 🙂 but you can modify them as you need.
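If you want the share to come back after a reboot, a common approach (a sketch – the mount point and credentials file are placeholders, and the credentials file should be readable by root only) is a cifs entry in /etc/fstab plus a small credentials file:

# /etc/smbcredentials/rafpeninja.cred  (chmod 600)
username=rafpeninja
password=YourAwesomeStorageKey==

# /etc/fstab
//rafpeninja.file.core.windows.net/docker-demo-data /mnt/dockerdemodata cifs vers=3.0,credentials=/etc/smbcredentials/rafpeninja.cred,dir_mode=0777,file_mode=0777 0 0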

 

Simple test

Once this is done you can head to the folder and create a sample file:

sudo touch test.me

 

When done, you can see the file instantly via the portal:

[Screenshot: the test file visible in the Azure portal]

 

 

And there you go – your file is immediately available. If you have any scenarios where you already use this, I'm keen to hear about them!

 

 

 


Docker compose and ELK – setup in automated way

Although originally this was supposed to be a short post about setting up an ELK stack for logging, with every moment I have been working with this technology it got me really 'inspired', and I thought it would be worth making it work the right way from the very beginning.

 

Now, since we are up for automating things, we will try to make use of Docker Compose, which will allow us to set up the whole stack in an automated way. Docker Compose is detailed here.

Compose, in short, allows you to describe what your services will look like and how they interact with each other (volumes/ports/links).

In this post we will be using docker + docker-compose on an Ubuntu host running in Azure. If you are wondering why I just show my IP addresses all the time on the screenshots … it's because those are not load-balanced static IP addresses, so every time I spin up a host I get a new one 🙂

 


This post contains information which has been updated in the post:

Docker compose and ELK – Automate the automated deployment

However, to get an idea of how the solution works I recommend just reading through 🙂


 

 

Installing Docker-compose

So the first thing we need to do is install docker-compose. Since, as we all know, Docker is under constant development, it is easiest to point you to the GitHub releases page rather than to a direct link which can become out of date.
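The usual steps look roughly like this (the version number is just an example – pick whatever the releases page currently lists):

sudo curl -L "https://github.com/docker/compose/releases/download/1.6.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose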

Once installed, you can use the following command to make sure it works:

docker-compose --version

 

Preparing folder structure

Since we will be using config files and storing Elasticsearch data on the host, we need to set up a folder structure. I'm aware this can be done better with variables 🙂 but Ubuntu is still a learning curve for me, so I will leave it up to you to find better ways 🙂 In the meantime, let's run the following commands:

sudo mkdir -p /cDocker/elasticsearch/data
sudo mkdir -p /cDocker/logstash/conf
sudo mkdir -p /cDocker/logstash/agent
sudo mkdir -p /cDocker/logstash/central
sudo mkdir -p /cDocker/compose/elk_stack

 

Clone configuration files

Once you have the folder structure, we will prepare our config files. To do this we will clone GitHub repositories (gists) which I have prepared in advance (and tested as well, of course):

git clone https://gist.github.com/60c3d7ff1b383e34990a.git /cDocker/compose/elk_stack

git clone https://gist.github.com/6627a2bf05ff956a28a9.git /cDocker/logstash/central/

git clone https://gist.github.com/0cd6594672ebfe1205a5.git /cDocker/logstash/agent/

git clone https://gist.github.com/c897a35f955c9b1aa052.git /cDocker/elasticsearch/data/

 

Since I keep slightly different names on GitHub (this might be subject to change in the future), we need to rename them a bit 🙂 For this you can run the following commands:

mv /cDocker/compose/elk_stack/docker-compose_elk_with_redis.yml  /cDocker/compose/elk_stack/docker-compose.yml

mv /cDocker/elasticsearch/data/elasticsearch_sample_conf.yml /cDocker/elasticsearch/data/elasticsearch.yml

mv /cDocker/logstash/agent/logstash_config_agent_with_redis.conf /cDocker/logstash/conf/agent.conf

mv /cDocker/logstash/central/logstash_config_central.conf /cDocker/logstash/conf/central.conf

 

Docker compose file

If you look at the compose file (embedded as a gist in the original post) you will notice that we define how our services are built up: what ports will be exposed and what links will be created among the containers. Thanks to that, the containers are created in a specific order and linked accordingly. And since we have already prepared the configuration files, the whole stack will be ready to go. A rough sketch of its shape follows.
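This is not the gist itself, just a hedged sketch of a v1-format compose file for this stack, based on the container names, ports and links that show up in the output further down (image tags and mounts are assumptions):

redis-cache:
  image: redis:latest
  ports:
    - "6379:6379"

elasticsearch-central:
  image: elasticsearch:2
  volumes:
    - /cDocker/elasticsearch/data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"

logstash-central:
  image: logstash:latest
  command: logstash -f /config-dir/central.conf
  volumes:
    - /cDocker/logstash/conf:/config-dir
  ports:
    - "25826:25826"
    - "25826:25826/udp"
  links:
    - elasticsearch-central
    - redis-cache

logstash-agent:
  image: logstash:latest
  command: logstash -f /config-dir/agent.conf
  volumes:
    - /cDocker/logstash/conf:/config-dir
  ports:
    - "25827:25827"
    - "25827:25827/udp"
  links:
    - redis-cache

kibana-frontend:
  image: kibana:latest
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch-central:9200
  ports:
    - "5601:5601"
  links:
    - elasticsearch-central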

 

Execute orchestration

Now we have everything in place for our first run of the orchestration. The next step is just navigating to the compose folder (where our docker-compose file is) and running the following command:

/cDocker/compose/elk_stack# docker-compose up -d

This will pull all the layers and create the services afterwards. Once completed you should see something similar to the following:

[Screenshot: docker-compose output with the ELK stack containers created]

 

 

Summary

Well, and that's it folks! We of course have the potential to do much more (using variables / labels etc.), however we will do more funky stuff in upcoming posts. Since Azure Files is finally in production, we will use it as persistent storage in one of our future posts, so stay tuned.

On the subject of the ready-to-use ELK stack, we will be looking into managing input using Logstash plugins, and we will see with our own eyes how this Docker ELK stack can empower our IoT automations!