
Running ElasticSearch/Kibana and Logstash on Docker

In today's world, if the combination of words in the subject is new to you, it means you need to catch up quickly 😀 In the IT world, Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say that nowadays we have DevOps engineers who change the world with the click of a button 😀

Today we will discuss how, by running Elasticsearch, Logstash and Kibana on Docker, you can visualise your environment's behaviour and events. At this stage I would like to point out that this is useful not only in IT, where it gives you insight into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single “go” you will build the required components and see the potential for yourself.

Since this will only touch on the real basics, I will try to point you to more detailed sources of information.

The whole exercise will be done on a host running Ubuntu, with the following version installed (as reported by lsb_release -a):

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker Engine on this OS, so make sure you have the engine installed.
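If you still need to do that, the convenience script from the Docker docs is the quickest route; a minimal sketch (the script URL is the one documented at the time of writing):

# install Docker via the official convenience script, then verify the daemon responds
curl -sSL https://get.docker.com/ | sh
sudo docker version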

As a quick verification, this is the version of Docker running during the write-up of this post (output of docker version):

Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64


So, since we have that ready, let's fire up an instance of Elasticsearch. Because we would like to store the data outside of the container, we need to create a folder somewhere on the host. As this is only a non-production exercise, I will just use a simple folder in the root of the filesystem. For this purpose I have created a folder called cDocker with a subfolder data/elasticsearch inside it. This can be achieved by running the following in the console:

sudo mkdir -p /cDocker/data/elasticsearch

Once that is ready, we can kick off the creation of our container. Note that Docker requires the host path of a volume mapping to be absolute, hence the leading slash:

sudo docker run -d --name elasticsearch -p 9200:9200 -v /cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all the required image layers, we can see the container running on our Docker host:

[Screenshot: docker ps output showing the elasticsearch container up and running]
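If you prefer the console to a screenshot, the same check (and a peek at the startup logs) looks like this:

# list running containers, then inspect the elasticsearch container's startup output
sudo docker ps
sudo docker logs elasticsearch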

For communicating with the API, you can see we have exposed port 9200. For the ease of making API calls I will be using the Postman extension for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:

[Screenshot: Postman showing a successful response from the Elasticsearch _status endpoint]
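The same call can of course be made from the console; replace <IP> with your Docker host address:

curl http://<IP>:9200/_status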

For the next part we will create the Logstash container, based on the official Logstash image. The main difference here is that we will link it to our elasticsearch container so the two will be able to talk to each other.

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 on TCP and UDP and mount a volume for the configuration (here I use $(pwd) to point at an existing conf folder in my current console session). Next we link our elasticsearch container and give it the alias db. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf. Since that file does not exist yet, we must create it in the mounted conf folder. Its contents come straight from the Logstash documentation and are a really basic configuration that lets us see the solution working.
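A minimal sketch of such a file, assuming the Logstash 1.x configuration syntax of the time (the exact file in the original post may differ), could look like this:

input {
  tcp { port => 25826 }
  udp { port => 25826 }
}

output {
  # "db" is the alias of the linked elasticsearch container
  elasticsearch { host => "db" protocol => "http" }
  # also echo events to the container log so they show up in docker logs
  stdout { codec => rubydebug }
}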

Now, if I open two session windows, one to tail the logstash container logs and the other to make a telnet connection to port 25826, we will see that the message I type into the telnet session gets parsed and forwarded to elasticsearch.
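In console terms the two sessions boil down to the following (replace <IP> with your Docker host address):

# session 1: follow the logstash container logs
sudo docker logs -f logstash
# session 2: connect and type a test message
telnet <IP> 25826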

[Screenshot: a telnet test message showing up in the logstash container logs]

Of course this kind of configuration is only good for an exercise, but it shows quickly how easily we can get the system running.

So, since that's ready, it's time to set up Kibana. It's quite easy using the default image from Docker Hub. I have chosen to link the containers for the ease of this exercise:

docker run -d --name kibana --link elasticsearch:elasticsearch -p 5601:5601 kibana

And now, seconds later, we can log in to our Kibana server and take a look at our forensic details 🙂 The message we sent before as a test is already visible! How cool is that 😀 ?
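Opening the UI is just a matter of pointing a browser at the published port, or sanity-checking it first from the console:

# HEAD request against the Kibana port we published above
curl -I http://<IP>:5601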


[Screenshot: Kibana discover view showing the first test event]

Let's add some extra fake messages so we have something to visualise. I will do that using the telnet command, sending some dummy messages to logstash.
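If you would rather script it, a quick loop with netcat does the same job (the message text is obviously just illustrative):

# fire a handful of dummy events at the logstash TCP input
for i in 1 2 3 4 5; do
  echo "dummy test message $i" | nc -q 1 <IP> 25826
done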

After that's done 🙂 we can create visualizations, and from there onwards... awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power in there, and you should explore the available resources if you want to do more 😀

[Screenshot: a first Kibana dashboard built from basic pie charts]

Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production: using Redis as a buffer to avoid losing messages at bottlenecks, handling complex message parsing, etc. We will try to look into some of those in upcoming posts!



Docker on Windows – Running Windows Server 2016

So, without any further delay, we go ahead and create our environment to play around with containers, however this time we will do it on Windows!

As you know, with the release of Windows Server 2016 TP3 we now have the ability to play around with containers on a Windows host. Since Docker is under heavy development, it is possible that a lot will change by RTM, so check back for updates to this post 😀

If, like me, you are happy automating admin tasks, probably one of the first commands you run would be...

powershell.exe

… 🙂 of course, the Windows engineer's best friend!

To get this show started I'm using Windows Server 2016 TP3 on Azure, as that gives the biggest flexibility. Microsoft has already posted some good pointers on how to get started using Docker. That documentation (or, rather, technical guide) is available here. It explains how to quickly get started.

So we start off by logging into our Windows host and opening a PowerShell session:

[Screenshot: a PowerShell session on Windows Server 2016 TP3]

A cool thing which I wasn't aware of is the syntax highlighting (something that people on Unix have had for a while 🙂 ), which makes working with PS and its output more readable (in my opinion).

As mentioned in my previous post, you have the option to manage containers with Docker (as we know it from Ubuntu, for instance) or with PowerShell. Since I have been working with Docker already, I decided to investigate that route and leave PowerShell for a bit later.

Following the documentation linked above, we can see that Microsoft has been really good to us and prepared a script which will take care of the initial configuration and the download of all the necessary Docker tools.

In order to download it we need to execute the following command:

wget -uri http://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1

If you would rather get to the source of the script, it's available here.

Once downloaded, you can just start the script and it will take care of the required configuration and the download of the images. Yes... downloading those images can take a while: it is approximately ~18GB of data. So you may want to start the configuration before your favourite TV show, or maybe a game of football in the park 😀
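Kicking it off from the prompt could look like the following (the execution-policy tweak is an assumption; your session may not need it):

# allow the downloaded script to run in this session only, then start it
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
C:\ContainerSetup.ps1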

Once completed, we have access to the goodies and we can start playing with Docker. The first thing worth doing is to check out our Docker information, easily done by:

docker info

In my case the output is the following:

[Screenshot: docker info output on the Windows Server 2016 TP3 host]

Off the top of my head, what is definitely worth investigating is the logging driver (used a bit differently, it allows you to ship Docker logs to a centralised system, e.g. Elasticsearch... but more about that a bit later 😀 ). The rest we will investigate along the way in this learning series on Docker for Windows.

Now, what would Docker be without images! After running that long configuration process we get access to Windows images prepared for us. If you have not been playing around with those yet, you can list them by issuing:

docker images

With that we get the available images:

[Screenshot: docker images output listing the windowsservercore base image]

The first thing to notice is that the approximate size of the default image is ~9.7GB, which raises a question these days: is that a lot? I think you need to answer that for yourself 🙂 or wait for MS to provide a bit more detail (unless that is already out and I just haven't found it 🙂 ). In my experience with Docker on Ubuntu, setting up a Linux host and its containers is a matter of minutes, so that many gigabytes of data might be a bit of a showstopper for quickly throwing up Windows hosts for Docker.

Now, since we have our image, it might be useful to get more detailed information about it. We can do that by issuing the command:

docker inspect <Container Id> | <Image Id>

The results are as follows:

[
{
    "Id": "0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea896852e",
    "Parent": "",
    "Comment": "",
    "Created": "2015-08-14T15:51:55.051Z",
    "Container": "",
    "ContainerConfig": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": null,
        "PublishService": "",
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": null,
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": null,
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": null
    },
    "DockerVersion": "1.9.0-dev",
    "Author": "",
    "Config": null,
    "Architecture": "amd64",
    "Os": "windows",
    "Size": 9696754476,
    "VirtualSize": 9696754476,
    "GraphDriver": {
        "Name": "windowsfilter",
        "Data": {
            "dir": "C:\\ProgramData\\docker\\windowsfilter\\0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea89
6852e"
        }
    }
}
]


So here we go: we will create our first container by running the following command. It will produce regular output and will run in the background.

docker run -d --name firstcontainer windowsservercore powershell -command "& {for (;;) { [datetime]::now ; start-sleep -s 2} }"

In order to see what the container is outputting, you can issue the command:

docker logs <container Id>|<container name>
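For example, using the name we gave our container above:

# prints the timestamps our looping powershell command has emitted so far
docker logs firstcontainer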


That's all fine... but how do we make any customisations to our container? The process is fairly simple: we run a new container and make our changes. Once we are happy with the changes we have implemented, we can commit them and save our image. We will quickly explore this by creating a container which will host our IIS web server.

We begin by creating a new container and entering an interactive session:

docker run -it --name iisbase windowsservercore powershell

Once your container is up, you are taken directly to a PowerShell session within it. We will use the well-known way to get the base image configured; what we are after here is adding the web server role using PS. First, let's check that it is definitely not installed:

Get-WindowsFeature -Name *web*

[Screenshot: Get-WindowsFeature showing the web server role not yet installed]

After that we will just add the web server role and then exit the container. Let's issue the command for the installation of the role:

PS C:\> Add-WindowsFeature -Name Web-Server

[Screenshot: Add-WindowsFeature installing the Web-Server role inside the container]

Before we exit, there is something worth mentioning... the speed of containers (at least at the moment of writing this blog, while the people at MS are still working on it 🙂 ). It can be significantly improved by removing the anti-malware service from your base image. This can be done by running the following command:

Uninstall-WindowsFeature -Name Windows-Server-Antimalware


Now we can exit our container by simply typing:

exit

A small thing worth mentioning 🙂 : pasting string content from the clipboard into containers is currently limited to ~50 characters. This is a work in progress and the limit should be lifted in the next releases.


Ufff, so we got to the point where our container has been configured. It's time to build an image from it. This can be done (at the moment) only on containers which are stopped, so stop yours first with docker stop. To execute the commit, run:

# docker commit <container Id> | <container Name> <RepoName>
docker commit 5e5f0d34988a rafpe:iis


The process takes a bit of time; however, once completed, we have access to our new image, which allows us to spin up multiple containers 🙂 If you would like to inspect the image we created, you can use the approach and commands discussed earlier in this post.

[Screenshot: docker images output showing the freshly committed rafpe:iis image]

And as you can see, our new image rafpe is slightly bigger than the base image (this is due to the changes we made).

Let's go ahead and do exactly what we have been waiting for: spin up a new container based on this image.

docker run -d --name webos -p 80:80 rafpe:iis

Now, at the moment of writing, I could not get connected to the exposed port 80 from the container host by issuing something along the lines of:

curl 127.0.0.1:80

According to information I found on the MSDN forums, other people are experiencing the same behaviour. External systems, however, can reach your container's exposed port, provided it is enabled on the firewall (and check that you have NAT set up correctly).
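If the firewall turns out to be the blocker, opening the port with PowerShell could look like this sketch (the rule name is purely illustrative):

# allow inbound TCP 80 on the container host so external systems can reach the mapped port
New-NetFirewallRule -DisplayName "Docker port 80" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow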


Now, to add something useful: if you would like to try a different base for this exercise with Docker, you can find other images using the following command:

docker search iis

Uff, I think that information should get you going. Please be advised that this is a learning exercise for me as well 🙂 so if I have put any terribly misleading information here, please let me know and I will update it.

So as not to leave you without pointers to some good places, here are the links I used for this:


Hope you liked the article! Stay tuned for more!



Rise of Docker – the game changer technology

If you have not yet been working with Docker containers, or worse, if you have not yet really heard about Docker and the significant changes it brings, then you should go find out more!

In simple words, Docker does the thing we always wanted: it isolates applications from our host layer. This opens up the possibility of creating microservices that can be dynamically scaled / updated / re-deployed!

If you would like to picture how this whole Docker thing works, then I'm sure that by looking at the image below you will grasp the idea behind it!

[Diagram: Docker containers running on top of Windows Server and Linux hosts]

So, the thing to keep in mind is that Docker is not some black magic box... it requires the underlying host components to run on top of. What I mean by that is: if you need to run Windows containers, you will need a Windows host, and the same principle applies to Linux containers, for which you will need a Linux host.

Up to this point there was no real support for Docker containers on Windows. However, at the time of writing this, Microsoft has released Windows Server 2016 (TP3), which brings major changes and, of primary interest to us, support for containers!

One of the things Microsoft has made people aware of is that you will be able to manage containers with Docker and with PowerShell... but... yep, there is a but: containers created with one cannot be managed with the other. I think that's a fair trade-off, but it's also something that will potentially change.


In the meantime, I invite you to explore Docker Hub and help yourself to more detailed information by exploring the Docker docs.

In one of the next posts we will discuss how to get a Windows Docker container running on Windows Server 2016 (TP3)! With that quick intro to Docker, I hope to see you again!



Road to challenges in IT

Hey,

It has been long and quiet for the last 2 years, I think, but that time comes to an end. A lot has been happening on the learning curve of SCCM/SCOM/PowerShell (especially the DSC part) and REST APIs.

Nowadays we cannot forget about the importance of cloud and hybrid environments, and of Docker technology!

With all of that, I can assure you that from now on I will be sharing, on a regular basis, as much as possible of the challenges I have come across and of the news I get from the engineering world!

As usual, the primary focus of my experience is providing advanced automation solutions while maintaining the security and availability of your services (nope, I did not forget -> scalability as well 😀 )

So stay tuned / fork GitHub and enjoy the automation!