
Docker compose: error while loading shared libraries libz.so.1

I recently hit a very annoying error on a freshly installed CentOS 7 machine when trying to use the most up-to-date docker-compose (1.6.2 at the time of writing).

The error stated the following when trying to execute a compose file:

docker-compose up -d
docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted

 

As a temporary measure I decided to disable SELinux; however, this did not help, and the logs were not helpful either in this instance. After a bit of wandering around the internet I came across a GitHub issue and tried one of the workarounds, which worked in my case.

The solution was to remount /tmp with exec permission by executing:

sudo mount /tmp -o remount,exec
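To confirm the remount took effect, you can check the mount options of /tmp. The fstab line below is only a sketch – adjust it to your system’s actual /tmp entry if you want the setting to survive a reboot:

# after the remount, the noexec flag should be gone from /tmp's options
mount | grep /tmp

# to persist across reboots, add exec to the /tmp options in /etc/fstab, e.g.:
# tmpfs   /tmp   tmpfs   defaults,exec   0 0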

 


Ansible role for Redhat 7 CIS baseline


Intro

If you are working with environments where certain policies and rules need to be applied, something like the CIS baselines will be well known to you.

It works on the basis that you define which points you will apply to your system; from that point onwards you are expected to deliver proof of how compliant (or not) your systems are, and, for any settings you do not apply, the reason why.

However, the problem comes when you need to enforce this compliance on multiple systems and make sure they are all happily running these policies.

Automation:

And here comes the really good part – you take a configuration management tool like Ansible and create a reusable piece of code which defines your infrastructure. Looking at the CIS baseline documents, starting from zero would be a lot of work… but… a good friend of mine has spent his time preparing a CIS baseline for Redhat 7, which is now available on GitHub in his repository HERE 🙂
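To give a rough idea of how such a role would be consumed – the role name, host group and privilege settings below are illustrative, not taken from his repository – a playbook could look like this:

# site.yml – hypothetical playbook applying a CIS baseline role
- hosts: rhel7
  become: yes
  roles:
    - rhel7-cis-baseline

From there Ansible takes care of applying the same settings to every host in the group, which is exactly the repeatability you want for compliance.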

 

And for much more interesting info you can always take a look at his blog at https://blog.verhaar.io

 



Autodesk Fusion 360 – Scaling SVG imported text

Passion for technology is truly not limited to server systems, Docker, SCCM and automation. It reaches beyond 🙂 at least that’s how it looks from my side. I’m interested in much more, and today I will post some information about using Fusion 360.

If you have not heard about this tool, then go ahead and take a look here. In my opinion it’s amazing how much CAD and CAM operations have been simplified with it. I must say that as a total noob I was able to create customised enclosures without any stress (although I believe there is still a great learning curve to climb 😀)

One of my initial problems was scaling SVG text which I was importing into Fusion… However, I found a great post on the Fusion 360 forums, which you may find here.

It basically says that if you need to scale, you can apply the following (quoted from the original post):

 

After some more research I’ve found this.
It looks like some SVGs are in points per inch (1/72″ = 0.01388) and some in pixels per inch (1/90″ = 0.01111).
Working with points (1/72): if a file is saved as 1″ you’d scale by 0.0138888; working in mm, if you want to import as 25.4mm you’d scale by 0.35277777777778.
Working with pixels (1/90): if a file is saved as 1″ you’d scale by 0.01111111; working in mm, if you want to import as 25.4mm you’d scale by 0.28222222222222.
Mark
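In other words, the scale factor is just the size of one SVG unit expressed in your target unit. Summarising the numbers above (my own recap, not part of the quote):

points (72 per inch):  1 inch -> 1/72    ≈ 0.0138889
                       25.4mm -> 25.4/72 ≈ 0.3527778
pixels (90 per inch):  1 inch -> 1/90    ≈ 0.0111111
                       25.4mm -> 25.4/90 ≈ 0.2822222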

 

So you may ask how this looks in practice? Then take a peek here… I have text which is 80mm long, and I’m going to import it with the appropriate scaling – in my case 0.28222222222222 (as I’m using mm) – and once done I can immediately start working with it as an object 😀

[Screenshot: 80mm text imported into Fusion 360 with 0.28222222222222 scaling]

 

I’m sure I will post some more information as I go through this. I hope this will help you get around text in Fusion 360 🙂

 


Docker – ELK with compose v2


This post contains information which is based on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it’s worth checking that out 🙂



If you are working with Docker then you are certainly in for non-stop challenging and interesting times. And since Docker is so actively developed, you cannot just build a solution and ‘forget about it’ – you would miss so much innovation.

So since I previously created my ELK stack with Docker compose, I decided it was finally a good time to move it to compose v2!


If you have not heard about the breaking changes, there is quite a nice post on the Docker blog with all the info to get you going. To save you looking all over the internet, here is the link.
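The short version: v2 adds a top-level version key, moves all containers under a services section, and makes networks and volumes first-class top-level objects. A minimal sketch of the new layout – the service name and image are illustrative, not the exact contents of my repository:

version: "2"

services:
  elasticsearch:
    image: elasticsearch:2.2
    ports:
      - "9200:9200"

networks:
  default:
    driver: bridge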

So once you get an idea of how cool things can now be done, we can get going. We will start off by getting the files from the GitHub repository. This time it differs a bit from previous posts – back then you could end up with a version of the repo which was not stable or just refused to work for whatever reason. I have now tagged specific versions, which allows you to check out a specific version of the code – in a nutshell, it will work 😀

so let’s get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
cd docker-elk-stack
git checkout tags/v2.0.0

Once you have this you can just start it off by typing

docker-compose up -d

This will start creating the containers, which gives the following output:

[Screenshot: output of docker-compose up -d creating the containers]

 

Let’s see if all the containers are running correctly by checking the logs:

docker-compose logs

You will probably get output similar to the following:

[Screenshot: docker-compose logs output]

 

And that’s basically how you would go about creating the stack with the default setup – but if you would like to tweak some settings, you can check out the following:

Logging:

I limited the logging driver’s file size and rollover by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"
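If you want to double-check that a running container actually picked these settings up, docker inspect can show the effective log configuration (the container name here is an assumption – use whatever name compose gave yours):

# prints the driver and options in effect for the container
docker inspect --format '{{ .HostConfig.LogConfig }}' elasticsearch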

 

Elasticsearch data persistence:

As for most development tasks, I do not use persistent data. If you would like to have it for the Elasticsearch cluster, you will have to change the following line in the compose file by specifying where to store the data:

volumes:
  # - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data
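Uncommented and with an explicit host path, it could look like this (the host path is just an example):

volumes:
  # persist Elasticsearch indices on the host
  - /data/elasticsearch:/usr/share/elasticsearch/data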

 

Logstash configuration:

By default logstash will use demo-logstash.conf, which is configured with just a beats input and some filtering applied. Once processed, data is sent to elasticsearch. There are more ready-to-use logstash config files under the ./logstash folder, so feel free to explore and possibly use them.
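How the active config is selected depends on how the compose file wires it up – typically either a command argument or a volume mount on the logstash service. A purely hypothetical example of mounting a different file from ./logstash:

services:
  logstash:
    volumes:
      # mount the chosen config over the one logstash reads at startup
      - ./logstash/demo-logstash.conf:/etc/logstash/conf.d/logstash.conf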


If you have any comments – leave them behind, as I’m interested in your approach as well 😀

 


Ansible – IP addresses of all nodes in group

I had been searching for this for a while, especially when I was setting up GlusterFS: the challenge of getting the IPs of all hosts within an Ansible group.

It is maybe not the prettiest or most elegant solution; however, it does perfectly what is expected of it. It sets a variable of IP addresses (in my example I’m using the IPv4 address of eth0) and joins them into a comma-separated result.
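Since the original gist is not reproduced here, below is a minimal sketch of the pattern (the group name ‘gluster’ and the fact name are illustrative). It extracts the eth0 IPv4 address from hostvars for every host in the group and joins the results:

- name: collect eth0 IPv4 addresses of all hosts in the group
  set_fact:
    group_ips: "{{ groups['gluster'] | map('extract', hostvars, ['ansible_eth0', 'ipv4', 'address']) | join(',') }}"

This relies on the extract filter (Ansible 2.1+); on older versions a loop over groups[...] with set_fact achieves the same.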


If you have a better approach – please leave a comment as I’m quite interested to read how you tackle this challenge 🙂

 


Atom cheatsheet

If you are like me and appreciate tools which enable syntax highlighting for multiple standards and let you be quick and efficient, then I recommend using atom.io

And since we want to be as fast as possible, below you can find a cheatsheet that I came across.


[Screenshot: Atom cheatsheet]