Docker compose v2 – using static network addresses

Docker Compose is a really great piece of code 🙂 that allows you to build better orchestration for your containers. Recent breaking releases introduced a lot of features. While looking at some of them I started wondering about situations where you build a more (or a bit less) complex container-based environment and do not have service discovery. In some instances you would just like to have static IP addresses.

Now this is perfectly easy to do when running containers from the CLI… but how do you do it with Compose? After looking at the documentation I managed to come up with a configuration that lets me specify static IP addresses for my containers in the compose file. For reference, you can find the full file below:

version: '2'

services:
  haproxy:
    image: haproxy:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ${PWD}/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    restart: always
    networks:
      widgets:
        ipv4_address: 172.16.238.2   # example static address
    logging:
      driver: json-file
      options:
        max-size: "100m"
        max-file: "3"
        labels: "haproxy"

  mariadb:
    image: mariadb:latest
    volumes:
      - /vol/appdata/mariadb:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret-pw
    restart: always
    networks:
      widgets:
        ipv4_address: 172.16.238.3   # example static address
    logging:
      driver: json-file
      options:
        max-size: "100m"
        max-file: "3"
        labels: "mariadb"

  app:
    image: apache:1.0
    restart: always
    ports:
      - "81:80"
    networks:
      widgets:
        ipv4_address: 172.16.238.4   # example static address
    logging:
      driver: json-file
      options:
        max-size: "50m"
        max-file: "3"
        labels: "app_orangella"

networks:
  widgets:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.238.0/24   # example subnet matching the addresses above


Hope this will get you rolling with Docker Compose 🙂



Docker compose: error while loading shared libraries libz.so.1

I recently hit a very annoying error on a freshly installed CentOS 7 machine when trying to use the most up-to-date docker-compose (1.6.2 at the time of writing).

When executing a compose file, the error stated the following:

docker-compose up -d
docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted


So temporarily I decided to disable SELinux, however this did not help, and the logs were not helpful in this instance either. After a bit of wandering around the internet I came across this GitHub issue and tried one of the workarounds, which worked in my case.

The solution was to remount /tmp with exec permission by executing:

sudo mount /tmp -o remount,exec
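Note that the remount above does not survive a reboot. If /tmp is a separate mount on your system, one way to persist the fix is to make sure its /etc/fstab entry carries the exec option – a sketch only, as the device and filesystem below are placeholders you should match to your own fstab:

```
# /etc/fstab – example entry only; keep your actual device/UUID and fs type
/dev/mapper/centos-tmp  /tmp  xfs  defaults,exec  0 0
```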



Docker – ELK with compose v2

This post contains information which is based on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it's worth checking that out 🙂


If you are working with Docker then you are in for non-stop challenging and interesting times. And since Docker is so actively developed you cannot just build a solution and 'forget about it' – you would miss too much innovation.

So since I previously created my ELK stack with Docker Compose, I decided it was finally a good time to move it to compose v2!



If you have not heard about the breaking changes, there is quite a nice post on the Docker blog where you can get all the info to get you going. To save you looking all over the internet, here is the link

Once you get an idea of how cool things can now be done, we can get going. We will start off by getting the files from the GitHub repository. This differs a bit from the previous posts – back then you could end up with a version of the repo which was not stable or just refused to work for whatever reason. This time I have tagged a specific version, which lets you check out code that is known to work – in a nutshell, it will work 😀

So let's get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
cd docker-elk-stack
git checkout tags/v2.0.0

Once you have this, you can start it off by typing

docker-compose up -d

This will commence creating the containers, which gives the following output:



Let's see if all the containers are running correctly by checking the logs:

docker-compose logs

You will probably get output similar to the following:



And that's basically how you create the stack with the default setup – but if you would like to tweak some settings, you can check out the following:


I limited the logging driver's file size and rollover by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"


Elasticsearch data persistence:

As for most development tasks, I do not use persistent data. If you would like to have it for the Elasticsearch cluster, you will have to uncomment the following line in the compose file, specifying where to store the data:

# - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data


Logstash configuration:

By default Logstash will use demo-logstash.conf, which is configured with just a beats input and some filtering applied. Once processed, data will be sent to Elasticsearch. There are more ready-to-use Logstash config files under the ./logstash folder, so feel free to explore and possibly use them.
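In outline, such a config has the following shape – a sketch only, as the post states just "beats input, some filtering, output to Elasticsearch"; the port and hostname here are assumptions, not the repo's exact file:

```
input {
  beats { port => 5044 }
}

filter {
  # filtering applied here
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
```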



If you have any comments – leave them behind, as I'm interested in your approach as well 😀



Docker Owncloud container with LDAP support

Since we have already talked about using Azure Files for storage with Docker, why not put it to use for your own storage – something like your own 'Dropbox' 🙂

The product I'm referring to here is called ownCloud, and I won't spend time telling you about its pros and cons. In my case I wanted to use it to share files with friends and family, so I decided to set up my own container.

I chose Docker, and there was a surprise 🙂 When using the default image there was no LDAP support. So I just added it to the Dockerfile and created a gist of it.
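The gist itself is not reproduced here, but the change boils down to installing PHP's LDAP extension on top of the stock image – roughly along these lines (package names and paths are assumptions for the Debian-based official image, not the exact gist):

```dockerfile
FROM owncloud:latest

# Add the PHP LDAP extension, which the stock image lacks
RUN apt-get update \
 && apt-get install -y libldap2-dev \
 && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu \
 && docker-php-ext-install ldap \
 && rm -rf /var/lib/apt/lists/*
```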








Logstash – Filtering Vyos syslog data

Hey, so in the last days/weeks 🙂 I have been working quite a lot with the ELK stack, especially on getting data from my systems into Elastic. There would not be any problem if not for the fact that the default parsing did not quite do the job. But what would IT life be without challenges?

So in this post I will explain briefly how I overcame this problem. And I'm sure you will be able to use this, or even make it better.

We will look into the following:

* Incoming raw data

* Creating filter

* Enjoying results


Incoming raw data:

So you have your VyOS box doing the hard work on the edge of your network. And now you would like to know when someone is knocking on your door, or to find the root cause when troubleshooting firewall rules.

Example incoming data from my box looks similar to the following:

<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] 
[internet_local-default-D]IN=eth2 OUT= 
SRC= DST= LEN=64 TOS=0x00 
PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP 
SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0

If we just apply basic syslog filtering, it will not give us all the required fields. The challenge here is that we need to extract the firewall rule name; after that, we have a clear use case for the key-value filter.
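In plain shell terms, the extraction we are after looks like this – a sketch of the same "rule name sits in the second [ ] group" logic that the grok pattern below performs (the sed expression is my own illustration, not part of the Logstash config):

```shell
# Raw line from the post (SRC=/DST= left empty, as in the original)
line='<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] [internet_local-default-D]IN=eth2 OUT= SRC= DST='

# Pull the rule name out of the bracket group that follows the kernel timestamp
rule=$(printf '%s\n' "$line" | sed -n 's/.*\] \[\([^]]*\)\].*/\1/p')
echo "$rule"   # prints: internet_local-default-D
```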


Creating filter:

So the time has come to use some magical skills to create the Logstash filter configuration. I would like to stress that different approaches can have an impact on performance, both negative and positive. As I'm still getting familiar with Logstash, this might not be the best solution, but I will definitely keep exploring.

You will notice that my filters use conditional statements so I do not process data unnecessarily. In my case VyOS traffic is tagged as syslog and contains a specific string in the message.

So without further babbling… we begin by parsing out the data from the message that will get extracted for sure.

  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
        "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
        "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }
  }

Points of interest here :

  • Grok filter DOES NOT break on match
  • We match on message, and further on the extracted syslog_message, to get our firewall rule from [ ]

Next we will change our fields on the fly using mutate:

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "= ", "=xxx" ]            # here we handle the scenario where a value is empty
    }

Points of interest :

  • I add a field called event_type with the value firewall, so in the future I can quickly query for those events.
  • I rename my previous field (firewall_rule) to a nested field
  • And lastly I use gsub to mitigate the problem of missing values in a key pair

Once this is done, I extract the remaining values using the kv filter, which is configured as follows:

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }

Points of interest :

  • I use include_keys so only the fields in the array will be extracted (positive impact on performance)
  • I tried field_split to help out with one of the previous challenges, but it did not make much difference
  • And lastly I specify my new nested fields for the extracted values
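To see what the kv step does in isolation, here is a plain-shell illustration of its behaviour – not the filter itself, just the same "split on whitespace, keep whitelisted non-empty keys" idea applied to the example message:

```shell
# A raw firewall message, as in the example earlier in the post
msg='IN=eth2 OUT= SRC= DST= LEN=64 TTL=56 PROTO=TCP SPT=51790 DPT=80'

fields=""
for kv in $msg; do
  key=${kv%%=*}
  val=${kv#*=}
  case "$key" in
    SRC|DST|PROTO|IN|MAC|SPT|DPT)
      # keep only whitelisted keys that actually carry a value
      [ -n "$val" ] && fields="${fields:+$fields }$key=$val"
      ;;
  esac
done
echo "$fields"   # prints: IN=eth2 PROTO=TCP SPT=51790 DPT=80
```

Empty pairs like OUT= and SRC= are dropped, which is exactly the situation the gsub workaround above guards against.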


So that's it! The complete file looks as follows:

filter {
  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
        "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
        "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "OUT= MAC=", "MAC=" ]            # here we handle the scenario where this value is empty
    }

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }
  }
}


Enjoying results:

Now we need to test whether this really works as we expect. To check, we will of course use Docker.

First I will create the aforementioned config file and name it logstash.conf. Once that's done we bring the container up with the following command:

docker run -d -p 25666:25666 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

This creates a container which I can then test locally. For this to work you need an input source (e.g. tcp/udp) and a stdout output (e.g. with the rubydebug codec).
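A minimal logstash.conf wrapper for such a local test could look like this – a sketch only, as just the filter section comes from this post; the tcp/udp input on 25666 and the rubydebug stdout are assumptions matching the docker run command above:

```
input {
  tcp { port => 25666 type => "syslog" }
  udp { port => 25666 type => "syslog" }
}

# ... the filter { } section from this post goes here ...

output {
  stdout { codec => rubydebug }
}
```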

Then I will split my screen using tmux and execute a request while looking at the results from docker logs.



And that's it! You have beautifully working parsing for your VyOS box! If you have any comments or improvements – feel free to share!




Docker compose and ELK – Automate the automated deployment

This post contains information which has been updated in the post

docker ELK with compose v2

However, to get an idea of how the solution works, I recommend reading through anyway 🙂


It's been a long time since it went a bit quiet here, but there was a lot keeping me busy. And as you know, in the majority of scenarios time is what we are short on 🙂 Today I wanted to share with you an update to one of my previous posts, where we set up ELK in an automated way.

When I originally finished the post it of course 'was working like a charm!' – and then I just left it for a while and focused on a couple of other projects. Recently I visited that page again as I wanted to quickly deploy an ELK stack for my testing… and then surprise – it does not work?! Of course the IT world is like a super-speed train 🙂 and it seems I just stopped at a station and forgot to jump back on 🙂

So from my perspective it was a great opportunity to craft some extra bash skills and refresh my knowledge of Elasticsearch, Logstash and Kibana.

So what’s changed ?

First of all, there is now one major script which gets the job done. The only thing you need to do is specify a cluster name for Elasticsearch.

I have also added some folder-existence checking, so it doesn't come up with dummy error messages saying the folders already exist.

How to run it now ?

Start by downloading the script locally, to the folder under which we will create the remaining folders for our components:

curl -L http://git.io/vBPqC > build_elk.sh

The -L option is there to follow redirects (as that's what git.io does for us).


Once downloaded, you might need to make it executable:

sudo chmod +x build_elk.sh


And that's all 🙂 The last thing to do is execute the script, with the first argument being the desired name of your Elasticsearch cluster. Output is almost instant and promising 🙂

user@host:~$ sudo ./build_elk.sh myelastico
Cloning into '/home/bar/compose/elk_stack'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 7), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (26/26), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/central'...
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 17 (delta 4), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (17/17), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/agent'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Cloning into '/home/bar/elasticsearch/config'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Creating redis-cache
Creating elasticsearch-central
Creating logstash-central
Creating kibana-frontend
Creating logstash-agent


Let's check the Docker daemon to see if our containers are indeed running…

user@host:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ff4f41753d6e        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes>25827/udp,>25827/tcp   logstash-agent
be97a16cdb1e        kibana:latest       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes>5601/tcp                               kibana-frontend
d3535a6d9df8        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes>25826/tcp,>25826/udp   logstash-central
38b47ffbb3e7        elasticsearch:2     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes>9200/tcp,>9300/tcp       elasticsearch-central
100227df0b50        redis:latest        "/entrypoint.sh redis"   2 minutes ago       Up 2 minutes>6379/tcp                               redis-cache
user@host:~$


They all are 🙂 That took less than a second (although I had the images already on my host…) – and what if we just check the browser?




And if anything changes? Well, it is all in git… 🙂 so just pull the changes and you will definitely get the most up-to-date version. But maybe you have some suggestions or improvements? Then just push them – I'm sure it would be beneficial 🙂


Below is the view of the gist 🙂


Docker compose and ELK – setup in automated way

Although originally this was supposed to be a short post about setting up an ELK stack for logging, with every moment I worked with this technology it got me really 'inspired', and I thought it would be worth making it work the right way from the very beginning.


Now since we are up for automating things, we will try to make use of Docker Compose, which will allow us to set up the whole stack in an automated way. Docker Compose is detailed here

Compose, in short, allows you to describe how your services look and how they interact with each other (volumes/ports/links).

In this post we will be using docker + docker-compose on an Ubuntu host running in Azure. If you are wondering why I show my IP addresses all the time on the screenshots… it is because those are not load-balanced static IP addresses. So every time I spin up a host I get a new one 🙂


This post contains information which has been updated in the post

Docker compose and ELK – Automate the automated deployment

However, to get an idea of how the solution works, I recommend reading through 🙂



Installing Docker-compose

So the first thing we need to do is install docker-compose. Since, as we all know, Docker is under constant development, it is easiest to give you the link to the GitHub releases page rather than a direct link, which can go out of date.

Once installed, you can use the following command to make sure it works:

docker-compose --version


Preparing folder structure

Since we will be using config files and storing Elasticsearch data on the host, we need to set up a folder structure. I'm aware this can be done better with variables 🙂 but Ubuntu is still a learning curve for me, so I will leave it up to you to find better ways 🙂 In the meantime, let's run the following commands:

sudo mkdir -p /cDocker/elasticsearch/data
sudo mkdir -p /cDocker/logstash/conf
sudo mkdir -p /cDocker/logstash/agent
sudo mkdir -p /cDocker/logstash/central
sudo mkdir -p /cDocker/compose/elk_stack
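The repeated prefixes above could indeed be collapsed into a variable – a small sketch (using a local base directory here so it runs without sudo; the real layout in this post lives under /cDocker):

```shell
# Build the same directory tree from a single base variable
BASE=./cDocker
for d in elasticsearch/data logstash/conf logstash/agent logstash/central compose/elk_stack; do
  mkdir -p "$BASE/$d"
done
ls "$BASE"
```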


Clone configuration files

Once you have the folder structure, we will prepare our config files. To do this we will clone GitHub repositories (gists) which I have prepared in advance (and tested as well, of course).

git clone https://gist.github.com/60c3d7ff1b383e34990a.git /cDocker/compose/elk_stack

git clone https://gist.github.com/6627a2bf05ff956a28a9.git /cDocker/logstash/central/

git clone https://gist.github.com/0cd6594672ebfe1205a5.git /cDocker/logstash/agent/

git clone https://gist.github.com/c897a35f955c9b1aa052.git /cDocker/elasticsearch/data/


Since I keep slightly different names on GitHub (this might change in the future), we need to rename them a bit 🙂 For this you can run the following commands:

mv /cDocker/compose/elk_stack/docker-compose_elk_with_redis.yml  /cDocker/compose/elk_stack/docker-compose.yml

mv /cDocker/elasticsearch/data/elasticsearch_sample_conf.yml /cDocker/elasticsearch/data/elasticsearch.yml

mv /cDocker/logstash/agent/logstash_config_agent_with_redis.conf /cDocker/logstash/conf/agent.conf

mv /cDocker/logstash/central/logstash_config_central.conf /cDocker/logstash/conf/central.conf


Docker compose file

If you look at the compose file, you will notice that we define how our image will be built, what ports will be exposed, and what links will be created amongst the containers. Thanks to that, machines will be created in a specific order and linked accordingly. And since we have already prepared the configuration files, the whole stack will be ready to go.
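The compose file itself is not reproduced in this text; based on the config files prepared above, a v1 file of roughly this shape would do the job – a sketch only, as the service names, ports and links here are assumptions rather than the exact gist:

```yaml
redis:
  image: redis:latest

elasticsearch:
  image: elasticsearch:latest
  ports:
    - "9200:9200"
  volumes:
    - /cDocker/elasticsearch/data:/usr/share/elasticsearch/data

logstash-central:
  image: logstash:latest
  command: logstash -f /conf/central.conf
  volumes:
    - /cDocker/logstash/conf:/conf
  links:
    - elasticsearch:db
    - redis:redis

logstash-agent:
  image: logstash:latest
  command: logstash -f /conf/agent.conf
  volumes:
    - /cDocker/logstash/conf:/conf
  links:
    - redis:redis

kibana:
  image: kibana:latest
  ports:
    - "5601:5601"
  links:
    - elasticsearch:elasticsearch
```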


Execute orchestration

Now we have everything in place for our first run of the orchestration. The next step is just navigating to the compose folder (where our docker-compose file is) and running the following command:

/cDocker/compose/elk_stack#: docker-compose up -d

This will pull all the layers and create the services afterwards. Once completed, you should see something similar to the following:





Well, and that's it folks! We of course have the potential to do much more (using variables / labels etc.), however we will do more funky stuff in the next posts. Since Azure Files is finally in production, we will use it as persistent storage in one of our future posts, so stay tuned.

On the subject of the ready-to-use ELK stack, we will look into managing input based on Logstash plugins, and we will see with our own eyes how this Docker ELK stack can empower our IoT automations!





Running ElasticSearch/Kibana and Logstash on Docker

In today's world, if the combination of words in the subject is new to you, it means you need to catch up quickly 😀 In the IT world, Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say that nowadays we have DevOps who, with the click of a button, change the world 😀

Today we will discuss how, by running Elasticsearch + Logstash and Kibana with Docker, you can visualise your environment's behaviour and events. At this stage I would like to point out that this can be useful not only in IT, where you get insights into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single "go" you will build the required components to see its potential.

Since this will only touch the real basics, I will try to point you to more interesting sources of information.

The whole exercise will be done on a host running Ubuntu with the following version installed:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker engine on this OS, so make sure you have the engine installed.

As a quick verification, this is the version of Docker running during the write-up of this post:

Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:19:00 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:19:00 UTC 2015
 OS/Arch:      linux/amd64


So since we have that ready, let's fire up an instance of Elasticsearch. Since we would like to store data outside of the container, we need to create a folder somewhere on the host. As this is only a non-production exercise, I will just use a simple folder in the root structure. For this purpose I created a folder called cDocker with a subfolder data/elasticsearch. This can be achieved by running the following in the console:

sudo mkdir -p /cDocker/data/elasticsearch

Once ready, we can kick off the creation of our container

sudo docker run -d --name elasticsearch -p 9200:9200 -v /cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all the required image layers, we can see the container running on our Docker host.



For communicating with the API, you can see we have exposed port 9200. For ease of making API calls I will be using the Postman add-on for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which in return should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:



For the next part we will create the Logstash container. We do this by creating a container based on the Logstash image. The main difference here is that we will link our elasticsearch container so they will be able to talk to each other.

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 TCP/UDP and mount a volume for configuration (here I use $(pwd)/conf for an existing folder in my current console session). Next we link our elasticsearch container and give it the db alias. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf. Since that file does not exist, we must create it. The contents of this file come directly from the Logstash documentation and are a really basic configuration enabling us to see a working solution.
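A first.conf along those lines could look like this – a sketch, with the port and db alias taken from the docker run command above; treat the exact output options as assumptions for the Logstash version of that era rather than the file used here:

```
input {
  tcp { port => 25826 }
  udp { port => 25826 }
}
output {
  # 'db' is the alias given to the linked elasticsearch container
  elasticsearch { hosts => ["db:9200"] }
  stdout { codec => rubydebug }
}
```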

Now, if I open 2 session windows – one to tail the logstash container logs and the other to create a telnet connection to port 25826 – we will see that a message I type into the telnet session gets translated and forwarded to elasticsearch.



Of course this kind of configuration is only good for an exercise, but it quickly shows how nicely we can get the system running.

So since that's ready, it's time to set up Kibana. It's quite easy using the default image from Docker Hub. I have chosen to link the containers for the ease of this exercise.

docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana

And now, seconds later, we can log in to our Kibana server and take a look at our forensic details 🙂 The message we sent before as a test is already visible! How cool is that 😀?




Let's add some extra fake messages to make a visualisation of it. I will do that using the telnet command, sending some dummy messages to logstash.

After that's done 🙂 we can create visualizations – and from there onwards… some awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power there, and you should explore the available resources if you want to do more 😀



Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production: using Redis to avoid bottlenecks and lost messages, avoiding complex message parsing, etc. We will try to look into some of those in upcoming posts!




Docker on Windows – Running Windows Server 2016

So without any further delay, we go ahead and create our environment to play around with containers – however, this time we will do it on Windows!

As you know, with the release of Windows Server 2016 TP3 we now have the ability to play around with containers on a Windows host. Since Docker is under heavy development, it is possible that a lot will change in RTM, therefore check for any updates to this post 😀

If you are happy automating admin tasks like I am, probably one of the first commands you run would be…


… 🙂 of course – the Windows engineer's best friend!


To get this show started, I'm using Windows Server 2016 TP3 on Azure, as that gives the biggest flexibility. Microsoft has already posted some good pointers on how to get started using Docker. That documentation (or, more or less, technical guide) is available here. It explains how to quickly get started.

So we start off by logging into our Windows host and starting a powershell session:



A cool thing which I wasn't aware of is syntax highlighting (something that people on unix have had for a while 🙂), which makes working with PS and its output more readable (in my opinion).

So, as mentioned in my previous post, you have the option to manage containers with Docker (as we know it on Ubuntu, for example) or with PowerShell. Since I have been working with Docker already, I decided to investigate that route and leave PowerShell for a bit later.

Following the documentation linked above, we can see that Microsoft has been really good and prepared a script which will take care of the initial configuration and the download of all the necessary Docker tools.

In order to download it we need to execute the following command :

wget -uri http://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1

If you would rather get to the source of the script, it's available here.

Once downloaded, you can just start the script and it will take care of the required configuration and the download of the images. Yes… downloading those images can take a while – it's approximately ~18GB of data. So you may want to start the configuration before your favourite TV show, or maybe a game of football in the park 😀

Once completed, we have access to the goodies – we can start playing with Docker. The first thing worth doing is checking out our Docker information… easily done by

docker info

In my case the output is following :



Off the top of my head, what is definitely worth investigating is the logging driver (when configured differently, it allows you to ship Docker logs to a centralised system, e.g. Elasticsearch… but about that a bit later 😀). The rest we will investigate along with this Docker-on-Windows learning series.

Now, what would Docker be without images! After running that long configuration process, we get access to Windows images prepared for us. If you have not been playing around with those yet, you can list them by issuing

docker images

With that we get the available images :


The first thing to notice is that the approximate size of the default image is ~9.7GB, which raises a question these days – is it a lot? I think you need to answer that question yourself 🙂 or wait for MS to provide a bit more detail (unless those details are out and I haven't found them 🙂). At the moment, in my experience with Docker on Ubuntu, the set-up of a Linux host and containers is a matter of minutes. So the gigabytes of data on Windows might be a bit of a showstopper for throwing up Windows hosts for Docker.

Now, since we have our image, it might be useful to get more detailed information. We can get it by issuing the command

docker inspect <Container Id> | <Image Id>

The results are following :

    "Id": "0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea896852e",
    "Parent": "",
    "Comment": "",
    "Created": "2015-08-14T15:51:55.051Z",
    "Container": "",
    "ContainerConfig": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": null,
        "PublishService": "",
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": null,
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": null,
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": null
    },
    "DockerVersion": "1.9.0-dev",
    "Author": "",
    "Config": null,
    "Architecture": "amd64",
    "Os": "windows",
    "Size": 9696754476,
    "VirtualSize": 9696754476,
    "GraphDriver": {
        "Name": "windowsfilter",
        "Data": {
            "dir": "C:\\ProgramData\\docker\\windowsfilter\\0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea89


So here we go – we will create our first container by running the following command. It will run in the background and give regular output.

docker run -d --name firstcontainer windowsservercore powershell -command "& {for (;;) { [datetime]::now ; start-sleep -s 2} }"

In order to see what the container is outputting, you can issue the command

docker logs <container Id>|<container name>


That's all fine… but how do we make any customisations to our container? The process is fairly simple: we run a new container and make our changes. Once we are happy with the changes we have implemented, we can commit them and save our image. We will quickly explore this by creating a container which will host our IIS web server.

We begin by creating a new container and entering an interactive session

docker run -it --name iisbase windowsservercore powershell

Once your container is up, we are taken directly to a powershell session within that container. We will use the well-known way to get the base image configured. What we are after here is adding the web server role using PS. First, let's check that it is definitely not installed:

Get-WindowsFeature -Name *web*


After that we will add the Web Server role and then exit the container. Let's issue the command to install the role:

PS C:\> Add-WindowsFeature -Name Web-Server



Before we exit, there is something worth mentioning … the speed of containers (at least at the time of writing this blog, where the people at Microsoft are still working on it 🙂 ). It can be significantly improved by removing the anti-malware service from your base image. This can be done by running the following command:

Uninstall-WindowsFeature -Name Windows-Server-Antimalware


Now we can exit our container by simply typing:

exit

A small thing worth mentioning 🙂 pasting string content from the clipboard into containers is currently limited to ~50 characters; this is a work in progress and the limit should be lifted in upcoming releases.


Ufff, so we have got to the point where our container is configured. It's time to build an image from it. This can be done (at the moment) only on containers which are stopped. To commit, run:

# docker commit <container Id>|<container Name> <RepoName> 
docker commit 5e5f0d34988a rafpe:iis 


The process takes a bit of time; however, once completed we have access to our new image, which allows us to spin up multiple containers 🙂 If you would like to evaluate the image you created, you can use the approach and commands discussed earlier in this post.



And as you can see, our new image rafpe is slightly bigger than the base image (this is due to the changes we made).

Let’s go ahead and do exactly what we have been waiting for: spin up a new container based on this image.

docker run -d --name webos -p 80:80 rafpe:iis

Now, at the time of writing, I could not connect to exposed port 80 from the container host by issuing something along the lines of


According to information I have found on the MSDN forums, other people are experiencing the same behaviour. This means that (if enabled on the firewall) you can reach your container's exposed port from external systems (check that you have NAT correctly set up).


Now, to add something useful: if you would like to try a different approach for this exercise, you can look for other images. To find them, use the following command:

docker search iis

Uff, I think that information should get you going. Please be advised that this is a learning exercise for me as well 🙂 so if I have written anything misleading here, please let me know and I will update it.

So as not to leave you without pointers to some good places, here are the links I used for this:


Hope you liked the article! Stay tuned for more!




Rise of Docker – the game changer technology

If you have not yet been working with Docker containers (or worse, if you have not yet really heard about Docker and the significant changes it brings) then you should find out more!

In simple words, Docker does the thing we always wanted: it isolates applications from our host layer. This enables the possibility of creating microservices that can be dynamically scaled / updated / re-deployed!

If you would like to imagine how this whole Docker thing works, then I'm sure that by looking at the image below you will get a grasp of the idea behind it!




So a thing to keep in mind is that Docker is not some black-magic box … it requires underlying host components to run on top of. What I mean by that is: if you need to run Windows containers you will need a Windows host, and the same principle applies for Linux containers, which need a Linux host.

Up to this point there was no real support for Docker containers on Windows. However, by the time of writing this document, Microsoft has released Windows Server 2016, which brings major changes and, of primary interest to us, support for containers!

One of the things Microsoft has made people aware of is that you will be able to manage containers with Docker and with PowerShell … but … yep, there is a but: containers created with one cannot be managed with the other. I think that's a fair trade-off, but it is something that will potentially change.


In the meantime I invite you to explore docker hub and help yourself by getting more detailed information when exploring the docker docs 

In one of the next posts we will discuss how to get a Windows Docker container running on Windows Server 2016 (TP3 ) ! With that quick intro to Docker, I hope to see you again!