
Docker Compose and ELK – Automate the automated deployment


This post contains information which has been updated in the post

docker ELK with compose v2

However, to get an idea of how the solution works, I recommend just reading through 🙂


Hello!

It’s been a long time since it got this quiet here, but there was a lot keeping me busy. And as you know, in the majority of scenarios time is what we are short on 🙂  Today I wanted to share with you an update to one of my previous posts, where we set up ELK in an automated way.

When I originally finished the post it of course “was working like a charm!” and then I just left it for a while and focused on a couple of other projects. Recently I visited that page again as I wanted to quickly deploy the ELK stack for my testing…. and then surprise – it does not work ?! Of course the IT world is like a super speed train 🙂 and it seems I just stopped at a station and forgot to jump back on 🙂

So from my perspective it was a great opportunity to polish some extra bash skills and refresh my knowledge of Elasticsearch, Logstash and Kibana.

So what’s changed ?

First of all, there is now one major script which gets the job done. The only thing you need to do is specify a cluster name for elasticsearch.

I have also added some folder existence checking, so it does not come up with dummy error messages that the folders already exist.
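The existence check boils down to something like the loop below (a simplified sketch, not the actual script’s code – the folder names mirror the clone targets shown in the output further down):

```shell
# Create each component folder only if it is missing,
# so a re-run does not spam "File exists" errors.
for dir in compose/elk_stack logstash/central logstash/agent elasticsearch/config; do
  if [ -d "$dir" ]; then
    echo "skipping $dir - already exists"
  else
    mkdir -p "$dir"
    echo "created $dir"
  fi
done
```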

How to run it now ?

Start by downloading the script locally to the folder under which we will create the remaining folders for our components

curl -L http://git.io/vBPqC > build_elk.sh

The -L option is there to follow redirects (as that’s what git.io does for us).

 

Once done, you may need to make it executable

sudo chmod +x build_elk.sh

 

And that’s all 🙂 The last thing to do is to execute the script, with the first argument being the desired name of your elasticsearch cluster. Output is almost instant and promising 🙂

bar@host:~$ sudo ./build_elk.sh myelastico
Cloning into '/home/bar/compose/elk_stack'...
remote: Counting objects: 26, done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 26 (delta 7), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (26/26), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/central'...
remote: Counting objects: 17, done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 17 (delta 4), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (17/17), done.
Checking connectivity... done.
Cloning into '/home/bar/logstash/agent'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 1), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Cloning into '/home/bar/elasticsearch/config'...
remote: Counting objects: 8, done.
remote: Compressing objects: 100% (8/8), done.
remote: Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (8/8), done.
Checking connectivity... done.
Creating redis-cache
Creating elasticsearch-central
Creating logstash-central
Creating kibana-frontend
Creating logstash-agent
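Internally, the handling of that first argument can be sketched roughly like this (an illustrative simplification, not the actual script’s code):

```shell
# Sketch of cluster-name argument handling (illustrative only).
deploy_elk() {
  cluster_name="${1:-}"
  if [ -z "$cluster_name" ]; then
    echo "usage: deploy_elk <elasticsearch-cluster-name>" >&2
    return 1
  fi
  echo "deploying with cluster name: $cluster_name"
}

deploy_elk myelastico
# prints: deploying with cluster name: myelastico
```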

 

Let’s check the docker daemon to see if our containers are indeed running …

bar@host:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                                NAMES
ff4f41753d6e        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25827->25827/udp, 0.0.0.0:25827->25827/tcp   logstash-agent
be97a16cdb1e        kibana:latest       "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:5601->5601/tcp                               kibana-frontend
d3535a6d9df8        logstash:latest     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:25826->25826/tcp, 0.0.0.0:25826->25826/udp   logstash-central
38b47ffbb3e7        elasticsearch:2     "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes        0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp       elasticsearch-central
100227df0b50        redis:latest        "/entrypoint.sh redis"   2 minutes ago       Up 2 minutes        0.0.0.0:6379->6379/tcp                               redis-cache
bar@host:~$

 

They all do 🙂 That took less than a second (although I had the images already on my host … ) and what if we just check the browser ?

[Screenshot: kibana_2 – Kibana frontend loaded in the browser]

 

 

And if anything changes ? Well, this is all in git … 🙂 so just pull the changes and you will definitely get the most up-to-date version. But maybe you have some suggestions or improvements ? Then just push them – I’m sure it would be beneficial 🙂

 

Below is the view on the Gist 🙂


Configure WSMan connectivity on vyos firewall

This is just a quick write-up. When working with secure environments it might be necessary to open some firewall ports. If by any chance you are looking for how to do that on a VyOS firewall, you will find the details below.

Enter configuration mode:

# Enter configuration mode
configure

Create a new rule:

# Port Group 
set firewall group port-group WSMan port '5985-5986'

# Set the rule
set firewall name some-name rule 666 action 'accept'
set firewall name some-name rule 666 description 'Allow for PowerShell remoting'
set firewall name some-name rule 666 destination group network-group AllNetworks
set firewall name some-name rule 666 destination group port-group 'WSMan'
set firewall name some-name rule 666 protocol 'tcp'
set firewall name some-name rule 666 source group address-group 'my-management-servers'

 

Now, the above might require a short explanation:

  • First we create a port group called WSMan
  • Then we create rule 666 which will allow PowerShell remoting
  • It will be allowed to the network group defined in AllNetworks ( defining it is beyond the scope of this short post, but you can always find it in the documentation http://vyos.net/wiki/User_Guide )
  • We specify that we allow the port group defined earlier ( so in our case the WSMan ports )
  • Its protocol is TCP
  • and lastly we say that the source of this traffic will be my management servers, defined as an address group with the name my-management-servers ( again 🙂 I will refer you to the wiki on how to create those )
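For completeness, the two groups the rule references could be defined along these lines – note that the network and address values here are made-up placeholders, so substitute your own (see the VyOS User Guide for details):

```
# Network group covering the destination networks (example value)
set firewall group network-group AllNetworks network '10.0.0.0/8'

# Address group with the management hosts allowed as source (example value)
set firewall group address-group my-management-servers address '10.0.0.10'
```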

Once you are done with those, you need to make sure the configuration is committed. This is done by calling the following:

commit

 

If there are no validation errors, just save the config 🙂 so you are not surprised after a reboot that it does not work 🙂

save

 

Enjoy securing your networks 🙂


PowerShell – Azure Resource Manager policies

Microsoft does not stop listening to people. Many IT professionals are heavily using Azure Resource Manager, and the natural course of action is to require better control over what can and what cannot be done.

Simple as it may sound, Microsoft has now offered ARM policies. You may find details from the 23:22 mark in the video below.

 

On the plus side, Microsoft has already prepared documentation for us, which is waiting here.

Is it difficult ? I personally think it is not – although there is no GUI, but which engineer these days uses a GUI 🙂 You have the option to use either the REST API or PowerShell cmdlets (communicating over that API 🙂 ).

What do policies give me control over ? They are built on the following principle:

{
  "if" : {
    <condition> | <logical operator>
  },
  "then" : {
    "effect" : "deny | audit"
  }
}

As you can see, we define conditions and operators, and based on that we take an action such as deny or audit.
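To make the template concrete, a policy following this principle that denies resources outside chosen regions could look roughly like this (the location values here are purely illustrative):

```
{
  "if": {
    "not": {
      "field": "location",
      "in": ["northeurope", "westeurope"]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```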

 

At the moment I’m not dropping any further examples of my own – the documentation already has a couple of them, so you might try them out as you read through the details.

 

Happy automating 🙂