
Docker – ELK with compose v2


This post contains information that builds on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it's worth checking that out 🙂



If you are working with Docker, then you are certainly in for non-stop challenging and interesting times. And since Docker is so actively developed, you cannot just build a solution and 'forget about it' – you would miss out on so much innovation.

So since I previously created my ELK stack with Docker Compose, I decided it was finally a good time to move it to compose v2!

 

 

If you have not heard about the breaking changes, there is quite a nice post on the Docker blog with all the info you need to get going. To save you searching all over the internet, here is the link

So once you get an idea of how cool things can now be done, we can get going. We will start off by getting the files from the GitHub repository. This time it differs a bit from the previous posts – back then you could end up with a version of the repo that was not stable or just refused to work for whatever reason. I have now tagged a specific version, which lets you check out code that is known to work – in a nutshell, it will work 😀

So let's get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
cd docker-elk-stack
git checkout tags/v2.0.0

Once you have this, you can start it up by typing:

docker-compose up -d

This will start creating the containers, which gives the following output:

[Screenshot: output of docker-compose up showing the containers being created]

 

Let's see if all containers are running correctly by checking the logs:

docker-compose logs

You will probably get output similar to the following:

[Screenshot: output of docker-compose logs]

 

And that's basically how you create the stack with the default setup – but if you would like to tweak some settings, you can check out the following:

Logging:

I limited the logging driver's file size and rollover by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"

 

Elasticsearch data persistence:

As for most development tasks, I do not use persistent data. If you would like it for the Elasticsearch cluster, you will have to uncomment the following line in the compose file, specifying where to store the data:

volumes:
  # - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data
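
For context, a minimal sketch of how the uncommented mount could sit inside a compose v2 service definition – the service name and image tag here are assumptions, so check the compose file in the repository for the real values:

version: '2'
services:
  elasticsearch:
    image: elasticsearch:2
    volumes:
      - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data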

 

Logstash configuration:

By default Logstash uses demo-logstash.conf, which is configured with just a beats input and some filtering applied; once processed, the data is sent to Elasticsearch. There are more ready-made Logstash config files under the ./logstash folder, so feel free to explore and use them.
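
For orientation, here is a minimal sketch of what a beats-to-Elasticsearch pipeline of this kind typically looks like – the port and the elasticsearch hostname are assumptions, and the actual demo-logstash.conf in the repository will differ in the filtering it applies:

input {
  beats {
    port => 5044
  }
}

filter {
  # filtering (e.g. grok/mutate) would be applied here
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}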

 

 

If you have any comments, leave them below – I'm interested in your approach as well 😀

 


Docker compose and ELK – Automate the automated deployment


This post contains information which has been updated in the post

Docker – ELK with compose v2

However, to get an idea of how the solution works, I recommend just reading through 🙂


Hello!

It's been a bit quiet here for a long time, but there was a lot keeping me busy. And as you know, in the majority of scenarios it's time we are short on 🙂 Today I wanted to share with you an update to one of my previous posts, where we set up ELK in an automated way.

When I originally finished the post it of course 'was working like a charm!', and then I just left it for a while and focused on a couple of other projects. Recently I visited that page again as I wanted to quickly deploy an ELK stack for my testing… and then surprise – it does not work?! Of course the IT world is like a high-speed train 🙂 and it seems like I just stopped at a station and forgot to jump back on 🙂

So from my perspective it was a great opportunity to sharpen some extra bash skills and refresh my knowledge of Elasticsearch, Logstash and Kibana.

So what's changed?

First of all, there is now one major script which gets the job done. The only thing you need to do is specify a cluster name for Elasticsearch.

Also, I have added some folder-existence checks so it doesn't throw spurious error messages about folders that already exist.
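
The exact logic lives in the script itself, but a hypothetical sketch of that kind of guard looks roughly like this (the path is just an example):

# only create the folder if it is not there yet, so no error is printed
if [ ! -d "$HOME/compose/elk_stack" ]; then
  mkdir -p "$HOME/compose/elk_stack"
fi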

How to run it now?

Start by downloading the script locally to the folder under which we will create the remaining folders for our components:

curl -L http://git.io/vBPqC >> build_elk.sh

The -L option is there to follow redirects (as that's what git.io does for us).
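
If you are curious where the short link points before running anything, you can inspect the redirect headers first – a small sketch, where -I only fetches the response headers:

curl -sI http://git.io/vBPqC | grep -i location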

 

Once done, you might need to make it executable:

sudo chmod +x build_elk.sh

 

And that's all 🙂 The last thing to do is execute the script with the first argument being the desired name of your Elasticsearch cluster. Output is almost instant and promising 🙂

$ sudo ./build_elk.sh myelastico
Cloning into '/home/bar/compose/elk_stack'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 7), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (26/26), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/central'...
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 17 (delta 4), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (17/17), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/agent'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Cloning into '/home/bar/elasticsearch/config'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Creating redis-cache
Creating elasticsearch-central
Creating logstash-central
Creating kibana-frontend
Creating logstash-agent

 

Let's check with the Docker daemon whether our containers are indeed running…

$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ff4f41753d6e        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25827->25827/udp, 0.0.0.0:25827->25827/tcp   logstash-agent
be97a16cdb1e        kibana:latest       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:5601->5601/tcp                               kibana-frontend
d3535a6d9df8        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25826->25826/tcp, 0.0.0.0:25826->25826/udp   logstash-central
38b47ffbb3e7        elasticsearch:2     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp       elasticsearch-central
100227df0b50        redis:latest        "/entrypoint.sh redis"   2 minutes ago       Up 2 minutes        0.0.0.0:6379->6379/tcp                               redis-cache
$

 

They all are 🙂 That took less than a second (although I already had the images on my host…) – and if we just check the browser?

[Screenshot: Kibana running in the browser]

 

 

And if anything changes? Well, this is all in git… 🙂 so just pull the changes and you will definitely get the most up-to-date version. Maybe you have some suggestions or improvements? Then just push them – I'm sure it would be beneficial 🙂

 

Below is the view of the gist 🙂


Docker compose and ELK – setup in automated way

Although originally this was supposed to be a short post about setting up an ELK stack for logging, the more I worked with this technology the more 'inspired' I got, and I thought it would be worth making it work the right way from the very beginning.

 

Now, since we are up for automating things, we will make use of Docker Compose, which allows us to set up the whole stack in an automated way. Docker Compose is detailed here

Compose, in short, allows you to describe what your services will look like and how they interact with each other (volumes/ports/links).

In this post we will be using docker + docker-compose on an Ubuntu host running in Azure. If you are wondering why I just show my IP addresses all the time on the screenshots… it's because those are not load-balanced static IP addresses, so every time I spin up a host I get a new one 🙂

 


This post contains information which has been updated in the post

Docker compose and ELK – Automate the automated deployment

However, to get an idea of how the solution works, I recommend just reading through 🙂


 

 

Installing Docker-compose

So the first thing we need to do is install docker-compose. Since, as we all know, Docker is under constant development, it is easiest to give you the link to the GitHub releases page rather than a direct link which can go out of date.
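
For reference, the typical installation follows this pattern from the releases page – the version number below is just an example, so pick the current one from the releases page:

sudo sh -c 'curl -L https://github.com/docker/compose/releases/download/1.6.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose'
sudo chmod +x /usr/local/bin/docker-compose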

Once installed, you can use the following command to verify it:

docker-compose --version

 

Preparing folder structure

Since we will be using config files and storing Elasticsearch data on the host, we need to set up a folder structure. I'm aware this can be done better with variables 🙂 but Ubuntu is still a learning curve for me, so I will leave it up to you to find better ways (a variable-based sketch follows the commands below) 🙂 In the meantime, let's run the following commands:

sudo mkdir -p /cDocker/elasticsearch/data
sudo mkdir -p /cDocker/logstash/conf
sudo mkdir -p /cDocker/logstash/agent
sudo mkdir -p /cDocker/logstash/central
sudo mkdir -p /cDocker/compose/elk_stack
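
As one possible improvement, here is a small sketch of the same structure created with a base-path variable and bash brace expansion (ELK_BASE is just an example name):

# same folders as above, driven by a single variable
ELK_BASE=/cDocker
sudo mkdir -p $ELK_BASE/{elasticsearch/data,logstash/{conf,agent,central},compose/elk_stack}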

 

Clone configuration files

Once you have the folder structure, we will prepare our config files. To do this we will clone the GitHub gists which I have prepared in advance (and tested as well, of course).

git clone https://gist.github.com/60c3d7ff1b383e34990a.git /cDocker/compose/elk_stack

git clone https://gist.github.com/6627a2bf05ff956a28a9.git /cDocker/logstash/central/

git clone https://gist.github.com/0cd6594672ebfe1205a5.git /cDocker/logstash/agent/

git clone https://gist.github.com/c897a35f955c9b1aa052.git /cDocker/elasticsearch/data/

 

Since I keep slightly different names on GitHub (this might be subject to change in the future), we need to rename them a bit 🙂 For this you can run the following commands:

mv /cDocker/compose/elk_stack/docker-compose_elk_with_redis.yml  /cDocker/compose/elk_stack/docker-compose.yml

mv /cDocker/elasticsearch/data/elasticsearch_sample_conf.yml /cDocker/elasticsearch/data/elasticsearch.yml

mv /cDocker/logstash/agent/logstash_config_agent_with_redis.conf /cDocker/logstash/conf/agent.conf

mv /cDocker/logstash/central/logstash_config_central.conf /cDocker/logstash/conf/central.conf

 

Docker compose file

If you look at the compose file below, you will notice that we define how our containers will be built: what ports will be exposed and what links will be created amongst containers. Thanks to that, the containers will be created in a specific order and linked accordingly. And since we have already prepared the configuration files, the whole stack will be ready to go.
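
The actual file comes from the gist cloned earlier; as a rough sketch of its shape, a compose (v1) definition for this stack could look something like the following. The container names and ports are based on the containers seen elsewhere in this series, while the image tags, exact paths, commands and links are assumptions:

redis-cache:
  image: redis:latest
  ports:
    - "6379:6379"

elasticsearch-central:
  image: elasticsearch:latest
  ports:
    - "9200:9200"
    - "9300:9300"
  volumes:
    - /cDocker/elasticsearch/data:/usr/share/elasticsearch/data

logstash-central:
  image: logstash:latest
  command: logstash -f /conf/central.conf
  ports:
    - "25826:25826"
    - "25826:25826/udp"
  volumes:
    - /cDocker/logstash/conf:/conf
  links:
    - elasticsearch-central:elasticsearch
    - redis-cache:redis

logstash-agent:
  image: logstash:latest
  command: logstash -f /conf/agent.conf
  ports:
    - "25827:25827"
    - "25827:25827/udp"
  volumes:
    - /cDocker/logstash/conf:/conf
  links:
    - redis-cache:redis

kibana-frontend:
  image: kibana:latest
  ports:
    - "5601:5601"
  links:
    - elasticsearch-central:elasticsearch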

 

Execute orchestration

Now we have everything in place for our first run of the orchestration. The next step is just navigating to the compose folder (where our docker-compose file is) and running the following command:

/cDocker/compose/elk_stack#: docker-compose up -d

This will pull all the image layers and then create the services. Once completed, you should see something similar to the following:

[Screenshot: docker-compose reporting the ELK stack containers as created]

 

 

Summary

Well, and that's it folks! There is of course potential to do much more (using variables, labels etc.), however we will do more funky stuff in the next posts. Since Azure Files is finally in production, we will use it as persistent storage in one of our future posts, so stay tuned.

On the subject of a ready-to-use ELK stack, we will be looking into managing input with Logstash plugins, and we will see with our own eyes how this Docker ELK stack can empower our IoT automations!

 

 

 


Running ElasticSearch/Kibana and Logstash on Docker

In today's world, if the combination of words in the subject is new to you, you need to catch up quickly 😀 In the IT world, Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say that nowadays we have DevOps who change the world with the click of a button 😀

Today we will discuss how, by running Elasticsearch, Logstash and Kibana on Docker, you can visualise your environment's behaviour and events. At this stage I would like to point out that this is useful not only in IT, where you get insights into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single "go" you will build the required components to see that potential.

Since this will only touch on the real basics, I will try to point you to more interesting sources of information.

The whole exercise will be done on a host running Ubuntu with the following version installed:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker engine on this OS, so make sure you have the engine installed.
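
If you do not have it yet, one common approach at the time of writing is the convenience script from the Docker docs (do review what any piped script does before running it):

curl -fsSL https://get.docker.com/ | sh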

As a quick verification, this is the version of Docker running during the write-up of this post:

Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

 

So since we have that ready, let's fire up an instance of Elasticsearch. Since we would like to store data outside of the container, we need to make a folder somewhere on the host. As this is only a non-production exercise, I will just use a simple folder in the root of the filesystem. For this purpose I have created a folder called /cDocker and within it a subfolder data/elasticsearch. This can be achieved by running the following in the console:

sudo mkdir -p /cDocker/data/elasticsearch

Once ready we can kick off creation of our container

sudo docker run -d --name elasticsearch -p 9200:9200 -v /cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all the required image layers, we can see the container running on our Docker host:

[Screenshot: docker ps showing the elasticsearch container running]

 

For communicating with the API, you can see we have exposed port 9200. For ease of making API calls I will be using the Postman addon for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:

[Screenshot: Postman showing a successful response from the Elasticsearch status endpoint]
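
If you prefer the command line over Postman, the same check can be done with curl from the Docker host – a small sketch against the published port, using the same endpoint as above:

curl http://localhost:9200/_status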

 

For the next part we will create the Logstash container, based on the Logstash image. The main difference here is that we will link our elasticsearch container so the two will be able to talk to each other.

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 TCP/UDP and mount a volume for configuration (here I use $(pwd) for an existing folder in my current console session). Next we link our elasticsearch container and give it the alias db. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf. Since that file does not exist yet, we must create it. The contents of that file come directly from the Logstash documentation and are a really basic configuration, enabling us to see a working solution.
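
The post does not reproduce the file itself, so here is a minimal sketch of creating such a basic configuration – the port and the db host come from the docker run command above, while the exact plugins and options are an assumption based on the basic examples in the Logstash docs:

mkdir -p $(pwd)/conf
cat > $(pwd)/conf/first.conf <<'EOF'
input {
  tcp { port => 25826 }
  udp { port => 25826 }
}
output {
  # "db" is the link alias pointing at the elasticsearch container;
  # on Logstash 1.x this option is called "host", on 2.x it is "hosts"
  elasticsearch { hosts => ["db:9200"] }
  stdout { codec => rubydebug }
}
EOF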

Now, if I open two session windows – one to tail the logstash container logs and the other to create a telnet connection to port 25826 – we will see that messages typed into the telnet session get parsed and forwarded to elasticsearch.

[Screenshot: logstash logs showing the test message sent over telnet]

 

Of course this kind of configuration is only good as an exercise, but it quickly shows how easily we can get the system running.

So since that's ready, it's time to set up Kibana. It's quite easy using the default image from Docker Hub. I have chosen to link the containers for the ease of this exercise:

docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana

And now, seconds later, we can log in to our Kibana server and take a look at our forensic details 🙂 The test message we sent before is already visible! How cool is that? 😀

 

 

[Screenshot: Kibana showing the first test event]

Let's add some extra fake messages so we have something to visualise. I will be doing that with the telnet command, sending some dummy messages to logstash.
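
If you would rather script it than type interactively, here is a small sketch doing the same thing with nc instead of telnet – the host IP is whatever your Docker host uses:

for i in $(seq 1 10); do
  echo "dummy message $i" | nc -q 1 <docker-host-ip> 25826
done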

After that's done 🙂 we can create visualizations – and from there onwards, awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power there, and you should explore the resources available if you want to do more 😀

[Screenshot: Kibana dashboard with basic pie charts]

 

Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production – using Redis to avoid bottlenecks and lost messages, avoiding complex message parsing, etc. We will try to look into some of those in upcoming posts!