
Docker compose: error while loading shared libraries libz.so.1

I recently ran into a very annoying error on a freshly installed CentOS 7 machine when trying to use the most up-to-date docker-compose (1.6.2 at the moment of writing).

The error stated the following when trying to execute a compose file:

docker-compose up -d
docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted

 

I temporarily disabled SELinux, however that did not help and the logs were not helpful either in this instance. After a bit of wandering around the internet I came across this GitHub issue and tried one of the workarounds, which worked in my case.

The solution was to remount /tmp with the exec permission by executing:

sudo mount /tmp -o remount,exec
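
Keep in mind that the remount is not persistent – after a reboot /tmp will come back with its original mount options. If you want the change to stick, one option (assuming /tmp is a separate mount defined in /etc/fstab; the device and filesystem below are placeholders) is to drop the noexec option from that line, e.g.:

# /etc/fstab - example /tmp entry without noexec (device and fs type are placeholders)
/dev/mapper/centos-tmp  /tmp  xfs  defaults,nodev,nosuid  0 0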

 


Ansible role for Redhat 7 CIS baseline


Intro

If you are working with environments where certain policies and rules need to be applied, something like the CIS baselines will be well known to you.

It works on the basis that you define which points you will apply to your systems, and from that point onwards you are expected to deliver proof of whether your systems are compliant (or not), and, if you do not apply certain settings, what the reason for that is.

However, the problem comes when you need to enforce this compliance on multiple systems and make sure they are all happily running these policies.

Automation:

And here comes the really good part – you take a configuration management tool like Ansible and create a reusable piece of code which defines your infrastructure. Looking at the CIS baseline documents, starting from zero would be a lot of work… but… a good friend of mine has spent his time preparing a CIS baseline role for Red Hat 7, which is now available on GitHub in his repository HERE 🙂

 

And for much more interesting info you can always take a look at his blog at https://blog.verhaar.io
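
Using the role from a playbook is straightforward. Below is a minimal sketch – the role name and hosts group are placeholders, so check the repository README for the actual names and variables:

---
# site.yml - applying the CIS baseline role to a group of RHEL 7 hosts
# (role and group names below are placeholders)
- hosts: rhel7_servers
  become: yes
  roles:
    - role: rhel7-cis-baseline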

 


Autodesk Fusion 360 – Scaling imported SVG text

Passion for technology, when it is genuine, is not limited only to server systems, Docker, SCCM and automation. It reaches beyond 🙂 and that's how it is on my side. I'm interested in much more, and today I will post some information about using Fusion 360.

If you have not heard about this tool, then go ahead and take a look here. It's amazing, in my opinion, how much CAD and CAM operations have been simplified with it. I must say that as a total noob I was able to create customised enclosures without any stress (although I believe there is quite a learning curve to climb there 😀).

One of my initial problems was scaling SVG text which I was importing into Fusion… However, I found a great post on the Fusion 360 forums, which you may find here.

It basically says that if you need to scale you can apply the following (quoted from the original post):

 

After some more research I've found this.
It looks like some SVGs are in points per inch, 1/72″ = 0.01388, and some in pixels per inch, 1/90″ = 0.01111.
Working with points (1/72): if a file is saved as 1″ you'd scale by 0.0138888; working in mm, if you want to import it as 25.4 mm you'd scale by 0.35277777777778.
Working with pixels (1/90): if a file is saved as 1″ you'd scale by 0.01111111; working in mm, if you want to import it as 25.4 mm you'd scale by 0.28222222222222.
Mark
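
In other words, the scale factor is simply the target unit divided by the SVG's assumed resolution: 25.4 / 72 ≈ 0.35278 and 25.4 / 90 ≈ 0.28222 when working in millimetres, or 1 / 72 and 1 / 90 when working in inches.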

 

So you may ask how this looks in practice? Take a peek here… I have text which is 80 mm long, and I'm going to import it with the appropriate scaling, which in my case is 0.28222222222222 (as I'm using mm). Once done, I can immediately start working with it as an object 😀

(Screenshot: the imported SVG text at the correct scale in Fusion 360)

 

I'm sure I will post some more information as I go through this. I hope this helps you get around working with text in Fusion 360 🙂

 


Docker – ELK with compose v2


This post contains information which is based on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it's worth checking that out 🙂



If you are working with Docker, then you are for sure in for non-stop challenging and interesting times. And since Docker is so actively developed, you cannot just build a solution and 'forget about it' – you would miss so much innovation.

Since I previously created my ELK stack with Docker Compose, I decided it is finally a good time to move it to the compose v2 format!

 

 

If you have not heard about the breaking changes, there is quite a nice post on the Docker blog with all the info to get you going. To avoid looking all over the internet, here is the link.
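
To give an idea of the shape of the new format, here is a minimal sketch of a v2 compose file – the service names and image tags below are placeholders rather than the actual definitions from the repository:

version: '2'

services:
  elasticsearch:
    image: elasticsearch:2.2
    networks:
      - elk

  logstash:
    image: logstash:2.2
    networks:
      - elk

networks:
  elk:
    driver: bridge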

Once you get an idea of how cool things can now be done, we can get going. We will start off by getting the files from the GitHub repository. This time it differs a bit from the previous posts – back then you could end up with a version of the repo which was not stable or simply refused to work for whatever reason. This time I have tagged a specific version, which allows you to check out a known-good state of the code – in a nutshell, it will work 😀

So let's get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
cd docker-elk-stack
git checkout tags/v2.0.0

Once you have this, you can just start it up by typing:

docker-compose up -d

This will start creating the containers, which gives the following output:

(Screenshot: docker-compose creating the ELK containers)

 

Let's see if all containers are running correctly by checking the logs:

docker-compose logs

You will probably get output similar to the following:

(Screenshot: docker-compose logs output)

 

And that's basically how you would go about creating the stack with the default setup – but if you would like to tweak some settings, you can check out the following:

Logging:

I have limited the logging driver's file size and roll-over by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"

 

Elasticsearch data persistence:

As for most development tasks, I do not use persistent data. If you would like to have it for the Elasticsearch cluster, you will have to uncomment the following line in the compose file, specifying where to store the data:

volumes:
  # - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data

 

Logstash configuration:

By default Logstash will use demo-logstash.conf, which is configured with just a beats input and some filtering applied. Once processed, data is sent to Elasticsearch. There are more ready-made Logstash config files under the ./logstash folder, so feel free to explore and possibly use them.
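
If you want to try one of the other config files, one way of doing it (assuming the compose file hands the config to Logstash via its command – the paths below are assumptions, so check the repository for the actual service definition) would be something along these lines:

  logstash:
    command: logstash -f /etc/logstash/conf.d/your-config.conf
    volumes:
      - ./logstash:/etc/logstash/conf.d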

 

 

If you have any comments, leave them behind, as I'm interested in your approach as well 😀

 


Ansible – IP addresses of all nodes in group

I had been searching for this for a while, especially when I was setting up GlusterFS: the challenge of getting the IP addresses of all hosts within a group in Ansible.

It is maybe not the prettiest and most elegant solution, however it does perfectly what is expected of it: it sets a variable with the IP addresses (in my example I'm using the IPv4 address of eth0) and joins them into a comma-separated result, as in the sketch below.
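
A minimal sketch of one way to do this (the group name glusterfs is a placeholder, and facts need to have been gathered for the hosts in that group):

# build a comma-separated list of eth0 IPv4 addresses of all hosts in the group
- name: collect eth0 IPv4 addresses of all hosts in the group
  set_fact:
    group_ips: "{% for host in groups['glusterfs'] %}{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}{% if not loop.last %},{% endif %}{% endfor %}"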

 

 

If you have a better approach – please leave a comment as I’m quite interested to read how you tackle this challenge 🙂

 


Atom cheatsheet

If you are like me and appreciate tools which enable you to work with syntax highlighting for multiple standards, and which also enable you to be quick and efficient, then I recommend using atom.io.

And since we want to be as fast as possible, below you can find a cheatsheet that I have come across.

 

 

(Screenshot: Atom keyboard shortcuts cheatsheet)

 

 


C# – Active Directory changes synchronization with cookie

In a recent post we discussed how to track Active Directory changes efficiently with PowerShell.

Now we can achieve the same thing with C#. And if you wonder why C#, since we already had it in PowerShell? Well, maybe you are writing a form of REST API for your enterprise? Or writing an application for personnel who are not fluent with scripting (the people that do use a GUI 🙂).

Nevertheless, this is going to be nice and easy. I will not be using screenshots of Visual Studio in this post, just providing you with the information needed.

 

The architecture and design are totally up to you 🙂 I will introduce you to the basics needed to put the bits and pieces together. To hold the information which we receive, it is best to create a class with the properties we are interested in and keep instances of it in a list.

public class adresult
{
    public string objName { get; set; }
    public string objDN   { get; set; }
    ...
    public string objXYZ  { get; set; } // Whatever other properties you are interested in
}

 

That was easy 🙂 Now let's get to writing our application. I focus here on a console application, but you can use whatever other type suits you.

Let's prepare the LDAP connection:

// Requires: using System.DirectoryServices; using System.IO; using System.Collections.Generic;
string ldapSrv = "LDAP://<LDAP-path>";
string ldapFilter = "(objectClass=user)";

// File to store our cookie
string ldapCookie = @"c:\adsync-cookie.dat";

// set up search
DirectoryEntry dir = new DirectoryEntry(ldapSrv);
DirectorySearcher searcher = new DirectorySearcher(dir);

searcher.Filter = ldapFilter;
searcher.PropertiesToLoad.Add("name");
searcher.PropertiesToLoad.Add("distinguishedName");
searcher.SearchScope = SearchScope.Subtree;
searcher.ExtendedDN = ExtendedDN.Standard;

 

Next is the interesting part – the directory synchronization object:

// create directory synchronization object
DirectorySynchronization sync = new DirectorySynchronization();

// check whether a cookie file exists and if so, set the dirsync to use it
if (File.Exists(ldapCookie))
{
    byte[] byteCookie = File.ReadAllBytes(ldapCookie);
    sync.ResetDirectorySynchronizationCookie(byteCookie);
}

 

Lastly, we combine what we have prepared and execute the search:

// Assign previously created object to searcher
searcher.DirectorySynchronization = sync;

// Create a list to hold our results
List<adresult> ADresults = new List<adresult>();

foreach (SearchResult result in searcher.FindAll())
{
    adresult objAdresult = new adresult();
    objAdresult.objName = (string)result.Properties["name"][0];

    string[] sExtendedDn = ((string)result.Properties["distinguishedName"][0]).Split(new Char[] { ';' });
    objAdresult.objDN = sExtendedDn[2];

    ADresults.Add(objAdresult);
}

// write new cookie value to file
File.WriteAllBytes(ldapCookie, sync.GetDirectorySynchronizationCookie());

// Return results
return ADresults;

 

This concludes this short post. I hope you will be able to use it for your complex Active Directory scenarios.

 

 


Docker Owncloud container with LDAP support

Since we already talked about using Azure Files for storage and for Docker, why not make your own use of it – something like your own 'Dropbox' 🙂

The product I'm referring to here is called ownCloud, and I won't be spending time telling you about its pros and cons. In my case I wanted to use it to share files with friends and family, so I decided to set up my own container.

I chose Docker and there was a surprise 🙂 When using the default image there was no LDAP support. So I have just added it in a Dockerfile and created a gist of it; the general idea is sketched below.
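
A minimal sketch of that idea (the base image tag and the LDAP library path are assumptions – adjust them to the image you actually use):

# Extend the official ownCloud image with PHP LDAP support
FROM owncloud:8.2

# install the LDAP client library and build/enable the PHP ldap extension
RUN apt-get update \
    && apt-get install -y libldap2-dev \
    && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
    && docker-php-ext-install ldap \
    && rm -rf /var/lib/apt/lists/*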


Ansible – ‘DEFAULT_DOCKER_API_VERSION’ is not defined

Working on a daily basis with DevOps makes you automate a lot of work. One of the orchestration tools I have been using recently is Ansible. I'm not saying THIS IS THE TOOL to go for 🙂 but it sure has a lot of potential.

So I decided to use it to deploy services also leveraging Docker… but then I received an error message:

 

NameError: global name ‘DEFAULT_DOCKER_API_VERSION’ is not defined

This is related to the docker-py library that Ansible's docker module relies on. The workaround that worked for me was to pin docker-py to a known-good version via pip before using the docker module:
  - name: install the required packages
    apt: name=python-pip state=present update_cache=yes
  - name: Install docker-py as a workaround for Ansible issue
    pip: name=docker-py version=1.2.3
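
With that in place, tasks using the docker module started working again. For illustration, a task along these lines (the container name and image are just placeholders) would now run without the error:

  - name: run an example container via the docker module
    docker: name=hello-nginx image=nginx state=started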

 

 


HA Nginx in Docker with keepaliveD

Do you need to create an HA proxy and are thinking of Nginx? Or maybe thinking even further… about Nginx and Docker? Here is something really simple which you can definitely take to the next level.

In my scenario I need something which you could call… hmmm, a "service gateway"?! Which in a nutshell is a solution that exposes an HA load balancer (and in the future also DNS with a connection to Consul).

A raw and basic design could look as follows:

(Diagram: basic keepalived design – two hosts exposing a shared VIP address)

So what we have here are two hosts that will expose a VIP address. Nothing really cutting edge, right? 😀 And as I have recently been working a lot with Ubuntu, the following steps are geared towards that OS.

Installing:

Installation is really plain and simple. You can use APT to get the package installed by running:

apt-get update && apt-get install keepalived

 

And that's it for the installation part 🙂 nothing like a quick install 🙂

 

Configuring:

Configuration is something you can spend some more time on, tuning it to your needs. It has a lot of options and I recommend doing a bit of reading. I will highlight here only the bare basics to get you running, but complex scenarios are well possible.

Also, you will notice that I do not use multicast but have switched to unicast.

 

The first thing you want to configure is the non-local bind setting (without it, the hosts would not be able to bind to the VIP address and we could not get the solution running):

echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf
sysctl -p

 

Next we create an init (upstart) configuration file for our service:

vi /etc/init/keepalived.conf

 

and once that's done you can paste in the contents of the snippet below:

description "lb and ha service"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

exec /usr/local/sbin/keepalived --dont-fork

 

Once done, we create the config file /etc/keepalived/keepalived.conf (on our first node):

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 91
        priority 100
        virtual_ipaddress {
            # YOUR VIP ADDRESS # 
        }
        unicast_src_ip #YOUR-1st-NODE-ADDRESS
        unicast_peer {
         #YOUR-2nd-NODE-ADDRESS
        }
}

 

And on the other machine you place the same config, but switch the addresses in the unicast source and peer:

vrrp_instance VI_1 {
        interface eth0
        state MASTER
        virtual_router_id 91
        priority 100
        virtual_ipaddress {
            # YOUR VIP ADDRESS # 
        }
        unicast_src_ip #YOUR-2nd-NODE-ADDRESS
        unicast_peer {
         #YOUR-1st-NODE-ADDRESS
        }
}

 

More details on configuration can be found here: http://www.keepalived.org/LVS-NAT-Keepalived-HOWTO.html (I have found this link to be full of useful information).

 

Bringing the service to life:

Now you might say that I have configured both as MASTER. But in this case the first one to come online will become the master.

Now on both nodes you can execute:

# Start service
service keepalived start
# show ip addresses
ip addr show eth0

And on one of them you should see your VIP address being online. Voilà! HA service running and operational.

 

Testing Nginx:

Now the time has come to test Nginx. For the purposes of this demo, I have set up both machines to host a Docker container running Nginx, along the lines of the commands below.
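
A minimal sketch of how such containers could be started (the container names and the directories holding the custom index.html pages are assumptions for this demo):

# on host-1 - serves a page saying "Hello from Nginx-1"
docker run -d --name nginx-1 -p 80:80 -v /opt/nginx-1:/usr/share/nginx/html:ro nginx

# on host-2 - serves a page saying "Hello from Nginx-2"
docker run -d --name nginx-2 -p 80:80 -v /opt/nginx-2:/usr/share/nginx/html:ro nginx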

(Screenshot: the Nginx containers running on both hosts)

 

Great! So both are listening on the correct VIP address. One is displaying the message "Hello from Nginx-1" and the second "Hello from Nginx-2". Let's test that from a client machine…

Initial request from our client machine:

(Screenshot: the initial request served via the VIP)

 

And now let me disable the network interface on host-1, and once that's done we make the request again:

(Screenshot: the second request returns an error)

 

 

The error you see here is kind of my fault (but I wanted to highlight it), since my keepalived service was stopped on the host. Once I started the service, the request was answered from the other host.

 

Summary:

So what are your options now? Well, quite a lot – you can, for example, set up GlusterFS and replicate your Nginx config files, or use Consul – explore consul-template and use that for dynamic Nginx config files…

If you have any interesting use case scenarios, leave them in the comments!