
Red Hat 7 – LDAP authentication using Ansible

Hey! Recently, along with Sanderv32, I have been trying to get LDAP authentication working on Red Hat machines. I must admit that we spent quite some time looking for structured, decent information on how to get this working. To our surprise, most of the information out there was inaccurate or outdated.

So without further delay we decided to tackle this challenge using Ansible. Of course the first attempts were just about getting the idea working. As we moved along, our playbook grew until it reached the stage at which we could deploy the LDAP authentication mechanism to all of our Red Hat 7 systems.

Below are the tasks from the playbook being used:

    - name: "LDAP Authentication | Install the required packages"
      yum: >
        name="{{item}}"
        state=present
      with_items:
        - "nss-pam-ldapd"
        - "oddjob"
        - "oddjob-mkhomedir"
      tags:
        - "ldap"
        - "packages"
        - "packages_ldap"

    - name: "LDAP Authentication | Ensure services are running"
      service: >
        name="{{ item }}"
        enabled=yes
        state=started
      with_items:
        - "nscd"
        - "nslcd"
        - "oddjobd"
      register: services_ldap
      tags:
        - "ldap"
        - "services_ldap"

    - name: "Debug | Display results"
      debug: msg="{{services_ldap.results}}"
      tags:
        - "ldap"

    - name: "LDAP Authentication | Enable LDAP PAM modules"
      command: "authconfig --enableldap --enableldapauth --enablemkhomedir --update"
      tags:
        - "ldap"

    - name: "LDAP Authentication | Adding configuration templates"
      template: >
        src="templates/{{item}}.j2"
        dest="/etc/{{item}}"
      with_items:
        - "nslcd.conf"
      tags:
        - "ldap"
        - "repository"
        - "repository_ldap"
      notify:
        - restart services ldap

And the associated handler:

---
  - name: "restart services ldap"
    service: >
      name="{{item.name}}" 
      state=restarted
    with_items: services_ldap.results
    tags:
      - "ldap"
      - "services_ldap"
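Once the playbook and handler have run, it is worth verifying that NSS actually resolves users through the new setup. A quick check can be done from Python's pwd module (the user name below is just an example – substitute one of your LDAP accounts, or run `getent passwd <user>`):

```python
import pwd

def lookup(username):
    """Resolve a user through NSS (files, LDAP, ...) and return basic fields."""
    entry = pwd.getpwnam(username)
    return {"uid": entry.pw_uid, "home": entry.pw_dir, "shell": entry.pw_shell}

# Any account resolvable via NSS works; a local one is shown here
print(lookup("root"))
```

If one of your LDAP accounts resolves here, nslcd is wired up correctly.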

 

In the above I have highlighted the part which we use to template the nslcd config file. The file contents are completely overwritten, so make sure you adjust the template to your needs.

This template has been used to connect to Active Directory with a dedicated bind user and a modified pagesize ( so our results are not trimmed ):

# {{ ansible_managed }}
uid nslcd
gid ldap

uri {{ ldap.uri }}
base {{ ldap.basedn }}
binddn {{ ldap.binduser }}
bindpw {{ ldap.binduserpw }}
scope sub
tls_reqcert allow

pagesize 10000
referrals off
idle_timelimit 800
filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    passwd uid              sAMAccountName
map    passwd homeDirectory    unixHomeDirectory
map    passwd gecos            displayName
map    passwd loginShell       "/bin/bash"
filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    shadow uid              sAMAccountName
map    shadow shadowLastChange pwdLastSet
filter group  (objectClass=group)


ssl no
tls_cacertdir /etc/openldap/cacerts
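The template above references an `ldap` variable with four keys. For completeness, here is a sketch of what the corresponding vars file could look like (the values are obviously placeholders you need to replace with your own):

```yaml
# group_vars/all/ldap.yml  (example values only)
ldap:
  uri: "ldap://dc01.example.com"
  basedn: "DC=example,DC=com"
  binduser: "CN=svc-ldap-bind,OU=Service Accounts,DC=example,DC=com"
  binduserpw: "changeme"
```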

 

 

That's it folks! If it does not work for you, please leave a comment, as this is what we use to make sure we have a means of using LDAP auth on our Linux boxes.

 


ChatOps using Hubot – Zabbix maintenance

 

 



 

This post is a supplement to the GitHub repo available at https://github.com/RafPe/hubot-zabbix-scripts

 


 

 

So the day has finally come when I can write about my recent involvement in automating 🙂 – this time with the use of hubot ( in this role, my favorite: Bender ) and the good Rocket.Chat.

 

Simple idea:

If we need to do it once – let's automate it, as for sure someone else will need to use it at least once as well

 

And in most cases it's true 🙂 So one day I just woke up quite early. Really too early to go to work already 🙂 and too late to still get some really good sleep. So I got the thing which we all think of in the morning ….. yezzzz, coffee 🙂 And then I thought about the things that people around me have been doing manually for quite a while :/

The challenge which came out of that short moment of thinking was: “setting Zabbix server maintenance with hubot ( bender )”

 

Getting pieces together:

Now I really liked that idea. It was around 6AM, my coffee was halfway through, so I geared up and was ready when I opened my laptop. What was really challenging here is the fact that I had never programmed in CoffeeScript nor in Python, and those are the 2 main components used to bake this solution. However, at the end of the day it's only a different grammar for getting things done 🙂

I decided not to reinvent the wheel and looked at things that already work. Since at the moment I have been automating a lot with Ansible, I looked at their GitHub page with extra modules.

And that was exactly what I needed. Then I just went ahead and downloaded hubot – following the nice and simple documentation. Based on the info there, getting the CoffeeScript to do exactly what I needed was just a matter of minutes 🙂 ( at least I hoped so )

 

So this is a proxy ?

Exactly. The CoffeeScript in hubot makes sure we respond to properly set regex values which correspond to commands given to our hubot. From there we execute the Python script.

So I have placed the biggest effort on getting the Python script running. I googled around and managed to get it running with arguments, which in return opened the doors to properly proxy from the CoffeeScript.
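Under the hood the script talks to the Zabbix JSON-RPC API. I am not reproducing the actual zbx-maint.py here, but a sketch of how a maintenance.create request could be assembled looks roughly like this (host ids and values are made up):

```python
import json
import time

def build_maintenance_payload(auth_token, name, hostids, minutes, description=""):
    """Assemble a Zabbix JSON-RPC maintenance.create request body."""
    now = int(time.time())
    return {
        "jsonrpc": "2.0",
        "method": "maintenance.create",
        "params": {
            "name": name,
            "description": description,
            "active_since": now,
            "active_till": now + minutes * 60,
            "hostids": hostids,
            "timeperiods": [
                # a single one-off window covering the requested length
                {"timeperiod_type": 0, "period": minutes * 60},
            ],
        },
        "auth": auth_token,
        "id": 1,
    }

payload = build_maintenance_payload("secret-token", "deploy-window", ["10105"], 60)
print(json.dumps(payload, indent=2))
```

The resulting JSON is then POSTed to the server's api_jsonrpc.php endpoint; deletion works analogously via maintenance.delete.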

 

The final version of the Python script ( final as of the write-up of this post ) has the following syntax:

python zbx-maint.py

usage: zbx-maint.py [-h] -u USER -p PASSWORD [-t TARGET] [-s SERVER] -a ACTION
                    [-l LENGTH] [-d DESC] [-r REQUESTOR] [-i ID]

 -u USER      : used to connect to zabbix - needs perm to create/delete maintenance
 -p PASSWORD  : password for the user above
 -t TARGET    : host/groups to create maintenance on
 -s SERVER    : URL of the zabbix server
 -a ACTION    : del or set
 -l LENGTH    : number of minutes to have maintenance for
 -d DESC      : additional description added to maintenance
 -r REQUESTOR : used to pass who has requested the action
 -i ID        : name of maintenance - used for deletion
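The interface above maps quite naturally onto Python's argparse; a minimal skeleton reproducing that usage could look like this (a sketch, not the actual zbx-maint.py):

```python
import argparse

def build_parser():
    # Flags and requiredness follow the usage text shown above
    parser = argparse.ArgumentParser(prog="zbx-maint.py")
    parser.add_argument("-u", dest="user", required=True, help="zabbix user")
    parser.add_argument("-p", dest="password", required=True, help="password")
    parser.add_argument("-t", dest="target", help="host/groups for maintenance")
    parser.add_argument("-s", dest="server", help="URL of the zabbix server")
    parser.add_argument("-a", dest="action", required=True, choices=["set", "del"])
    parser.add_argument("-l", dest="length", type=int, default=60, help="minutes")
    parser.add_argument("-d", dest="desc", default="", help="description")
    parser.add_argument("-r", dest="requestor", help="who requested the action")
    parser.add_argument("-i", dest="id", help="maintenance name, used for deletion")
    return parser

args = build_parser().parse_args(
    ["-u", "bender", "-p", "secret", "-a", "set", "-t", "webservers", "-l", "30"]
)
print(args.action, args.target, args.length)
```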

 

What about security ?

All passwords and links used within the hubot script are passed using environment variables. For proper control of processes and isolation I have been using supervisord here ( which is a great tool for this ).

 

HUBOT_ZBX_USER      : user accessing zabbix
HUBOT_ZBX_PW        : password for the user
HUBOT_ZBX_URL       : zabbix server URL
HUBOT_ZBX_PYMAINT   : full path to zbx-maint.py script (used by coffee script)
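Inside the scripts these variables are then simply read from the environment; failing fast when one is missing saves a lot of debugging. A sketch (variable names as listed above):

```python
import os

REQUIRED = ["HUBOT_ZBX_USER", "HUBOT_ZBX_PW", "HUBOT_ZBX_URL", "HUBOT_ZBX_PYMAINT"]

def read_config(env=os.environ):
    """Return the hubot/zabbix settings, raising early if anything is missing."""
    missing = [name for name in REQUIRED if name not in env]
    if missing:
        raise RuntimeError("missing environment variables: %s" % ", ".join(missing))
    return {name: env[name] for name in REQUIRED}
```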

 

Bender in action:

So without any further delay, this is how it looks in action ….

 

hubot_zbx_maint_v1

 

 

 

Being considered:

I’m still looking for feedback from other people to see what can be done better. Most likely I will be publishing some more Zabbix automations to enrich chatops and make life more interesting 🙂

 

 


IP address management with PHPipam and docker

Recently I came across the need for an IP address management tool. I looked at several options and decided that the best one was phpIPAM, for 2 main reasons:

  • API
  • Overlapping subnets

Also, the look and feel gives me a positive feeling about the product, and there is a lot of development being done on GitHub.

phpipam_mainconsole

Using docker to run the application

So I have decided to prepare a fully operational docker solution to support this application. Having learned from previous mistakes, the GitHub repository to which I will be referring has been tagged accordingly, so if any changes occur you will always be able to follow this post's directions.

 


 

I would like to avoid duplication of information, therefore I will just highlight one of the possible installation options, as the rest is covered on Docker Hub and on GitHub.

 

We start off by cloning our repository:

git clone https://github.com/RafPe/docker-phpipam.git

 

Once that's done we can check out the specific tag ( the tag associated with the content of this post ):

git checkout tags/v1.0.1

 

and then we have all the components needed to run the last command

docker-compose up -d

which in return gives the following output:

phpipam_compose_running

 

And off you go ahead with testing. A couple of points here are worth mentioning:

  • For production use a database backend which has persistent storage – in this form the DB has no persistent storage
  • Consider using SSL
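For the persistent-storage point, the usual approach with docker-compose is to mount a volume for the database data directory. A sketch ( the service and image names here are illustrative – check the repository's actual docker-compose.yml ):

```yaml
# docker-compose override sketch - adjust to the real service names
services:
  phpipam-db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme
    volumes:
      - ./data/mysql:/var/lib/mysql   # data survives container recreation
```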

 

The application has a lot of pros, and in my opinion is really worth looking into if your management tooling needs some automation!

 


ZFS – creating pool from disks on CentOS 7

Is it storage system time ?

So today's post will be a short one about creating a ZFS pool on CentOS 7. This is a logical follow-up from the previous post, where I covered the build-out of the new server. What I decided on for the OS is software RAID-1 using LVM.

Now for the data disks I have 3x4TB disks. After looking around I made the decision to use ZFS. Why ZFS? It's reliable ( I have worked with systems based on it before ), and it's really fast if you do a deep dive and configure it to your needs. As I would like to avoid duplication of posts, you can find install guidelines on the ZFS wiki.

 

For some people ( like me 🙂 ) it's handy to drop an eye on the documentation so you know what you are dealing with. This can be a good entry point before we continue, and I will most probably refer you to RT*M 🙂 a couple of times along the way. The documentation for administering ZFS is here.

 

Which drives do we use ?

So let's start by checking our available disks:

[root@host ~]# fdisk -l /dev/sd?

### OS DISKS REMOVED FOR VISIBILITY ### 

Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdf: 240.1 GB, 240057409536 bytes, 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xdb1d2969

   Device Boot      Start         End      Blocks   Id  System
[root@host ~]#

 

Although here it might be worth looking into assigning human-readable aliases to your drives. In a single-host scenario it might not be so useful, but when you get into working with enterprise systems in production, where for obvious reasons 🙂 you have more than one server, it becomes really handy.
But before actually doing this in the operating system, I did the prep work on the server itself:

rsz_2016-07-23_201702

 

So off we go to create /etc/zfs/vdev_id.conf:

#
# Custom by-path mapping for large JBOD configurations
#
#<ID> <by-path name>
alias BAY1_DISK1 pci-0000:00:17.0-ata-1.0
alias BAY1_DISK2 pci-0000:00:17.0-ata-2.0
alias BAY0_DISK2 pci-0000:00:17.0-ata-3.0
alias BAY0_DISK1 pci-0000:00:17.0-ata-4.0
alias BAY0_DISK0 pci-0000:00:17.0-ata-5.0
# alias  xxx      pci-0000:00:17.0-ata-6.0

 

Once this is done we need to trigger an update using the udevadm command:

udevadm trigger

Now after doing the above we will be able to list the disks using our aliases:

listdevbyvdev

Now all that's left to do is to create the ZFS pool. However, just to be on the safe side, we can execute a dry run first:

zpool create -f -n data raidz BAY0_DISK0 BAY0_DISK1 BAY0_DISK2

In the command above the following happens:

  • we request the pool to be created by using zpool create
  • we indicate we would like a dry run by using the -n switch
  • data is our pool name
  • raidz is the ZFS raid type which I have chosen since I have 3 disks ( it would be cool to have 4 and use raidz2 )

The result shows what would be done for our drives:

zfs_dry_run_pool_creation

 

For me this looks promising – let's go ahead and get our pool created for real:

zpool create -f -o ashift=12 -O atime=off -m /pools/data data raidz BAY0_DISK0 BAY0_DISK1 BAY0_DISK2

which does the following:

  • -f : forces creation, as ZFS suspects we have partitions on those drives – but trust me – we don’t
  • ashift=12 : follows the recommendation for drives with 4K block sizes ( Advanced Format drives – which I recommend getting familiar with )
  • atime=off : disables access-time updates, which in return gives us a performance boost. This is something you need to decide whether you will be using
  • -m : is our mount point for the pool. The directory needs to exist already
  • raidz : is of course the RAIDZ type we will be using

 

The reason I'm mentioning 4K Advanced Format drives here is performance. Below is a snippet from a forum thread that explains what we are looking at:

 


Furthermore, some ZFS pool configurations are much better suited towards 4K advanced format drives.

The following ZFS pool configurations are optimal for modern 4K sector harddrives:
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives

The trick is simple: subtract the number of parity drives and you get:
2, 4, 8, 16, 32 …

This has to do with the recordsize of 128KiB that gets divided over the number of disks. Example for a 3-disk RAID-Z writing 128KiB to the pool:
disk1: 64KiB data (part1)
disk2: 64KiB data (part2)
disk3: 64KiB parity

Each disk now gets 64KiB which is an exact multiple of 4KiB. This means it is efficient and fast. Now compare this with a non-optimal configuration of 4 disks in RAID-Z:
disk1: 42,66KiB data (part1)
disk2: 42,66KiB data (part2)
disk3: 42,66KiB data (part3)
disk4: 42,66KiB parity

Now this is ugly! It will either be downpadded to 42.5KiB or padded toward 43.00KiB, which can vary per disk. Both of these are non optimal for 4KiB sector harddrives. This is because both 42.5K and 43K are not whole multiples of 4K. It needs to be a multiple of 4K to be optimal.
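The arithmetic from the quote is easy to check yourself. A tiny calculation of how a 128KiB record is spread over the data disks of a RAID-Z vdev:

```python
RECORD_KIB = 128  # default ZFS recordsize

def data_per_disk(total_disks, parity_disks=1):
    """KiB of data each data disk receives for one full record write."""
    data_disks = total_disks - parity_disks
    return RECORD_KIB / data_disks

for disks in (3, 4):
    share = data_per_disk(disks)
    aligned = share % 4 == 0  # whole multiple of a 4KiB sector?
    print("%d-disk RAID-Z: %.2f KiB per disk, 4K aligned: %s" % (disks, share, aligned))
```

With 3 disks each data disk gets an even 64KiB; with 4 disks the 42.67KiB share is not a multiple of 4KiB, which is exactly the non-optimal case described above.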


 

So after running the command above we have our pool running:

zfs_pool_created

 

And that's more or less it for now 🙂 we got our pool running and mounted as it should be.

 

Extra resources ? Something for future ?

In later posts we will look into performance considerations of different configurations, which will enable us to make configuration choices based on factual measurements.

Also, I have come across a really useful post about ZFS which you can find below:

Install ZFS on Debian GNU/Linux


Building own datacenter – this time at home

Datacenter in home ? Why ?

So a natural question would be why, for whatever reason, you would want to build a “datacenter” ( quoted on purpose ) in your peaceful home. Well, for 99.9% of people the answer would be “I would never have one“. Well, I'm in that 0.1%, and the reason is simple … well, it's even more than one reason. It's a passion for IT and the drive to learn & do cool stuff.

For this reason several things at my piece of home have changed:

  • Internet connection is now upgraded to 0.5 Gbps fibre
  • Public IP address space of /29
  • Internal home network secured with a customised APU board acting as edge router
  • Managed switch to introduce VLANs
  • Strong wireless with Ubiquiti
  • And I think the most interesting ….. the server

 

Networking with “learn as you go”

Since apart from just application/server/development work I also try to do electronics, and this subject interests me as well, I decided that the router given to me by my provider was far away from being “cool” and fully under my control. So I started off with a good old desktop station. But that kicked me back to the so-called “router on a stick”.

Since I wanted a better experience I decided to move on, and by pure luck I found this board. Since then I have acquired 3 of them. How come? Well, they do much more than just networking. I can use them to learn Linux kernel patching skills, I can use the board to connect the world of software to the world of electronic devices made by me, and what else …. ohhh yes … it's my gigabit ethernet ( 3x ports ) edge router, which in return allows me to learn all the tricks about networking/routing/VLANs/troubleshooting 🙂

If that was not enough, I harvested my old Dell laptop and plugged in an Ericsson Mobile Modem (3G), which now gives me alternative internet in case of failure 🙂 wow + wow + wow 🙂

So here is how one looks without an enclosure:

IMG-20160712-WA0008

 

And there it is 🙂 If you have any questions about this small devil – just let me know 🙂 I will be happy to provide you with more answers.

 

That was fun – but where is the server ?

So the whole point here would be having a server, which I strongly believe is not about “how much did it cost ?” but all about “what can you learn on it?“. If you already see this difference then you are most probably one step ahead of others.

Now at this point I will not pretend that I know hardware really well … I don't 🙂 and that's why a good friend of mine with super uber eXperience helped me to put together a kit list which turned out to be a great server for learning purposes. Below you can see the table of what we concluded to be best:

Type         | Product                       | Comments                                       | Link
Motherboard  | Asus Q170M2                   | Chosen for its 2 built-in ethernet ports       | https://www.asus.com/Motherboards/Q170M2/
PSU          | Corsair VS550 – 550W          | To have enough spare power                     |
Enclosure    | Antec ISK 600m                | Just cause it looks cool 🙂                    |
HDD ( data ) | HGST 4TB 3.5″ 7200RPM         | 3x of them – RAID5 – used for data             |
HDD ( os )   | HGST 500GB 2.5″ 7200RPM       | 2x of them – RAID1 – used for OS               |
Processor    | Intel i7 6700                 | To get the most out of it                      |
Memory       | Hyper Fusion [email protected] 16GB | 4x of them – to max out the board. Max the fun |

Now this is what we call a server for fun. At this stage you may ask what will be running on that box …. well, KVM for virtualisation and Open vSwitch to play around with SDN.

So was it scary to put it all together ?

Ohhh hell it was 🙂 I felt like I was handling really fragile lego pieces, and the fact that I was so excited didn't really help 🙂 So I'm attaching a couple of the better photos taken during the build-out, up till the first boot 🙂 enjoy!

 

2016-07-05 195117 2016-07-06 073927 2016-07-06 080504 2016-07-06 081813 2016-07-06 083100 2016-07-06 090849 20160706_081212 20160706_102301

 

 

Thats it for now folks 🙂

Hope you enjoyed this short adventure. We will be using the toys mentioned in this post quite soon and will definitely have more fun! So stay tuned!


DevOpsdays 2016 Amsterdam – Videos are here

If you have missed DevOps days in Amsterdam this year for whatever reason – then you can watch all the published videos on the Vimeo channel! Just head out and go HERE.

Some of my favorites :

DevOpsdays Amsterdam 2016 Day 1 – Adam Jacob, on Vimeo.

DevOpsdays Amsterdam 2016 Day 1 – Avishai Ish-Shalom, on Vimeo.

DevOpsdays Amsterdam 2016 Day 1 – Daniël van Gils, on Vimeo.

 

Hope you enjoy them as well!

 


Git – Visualize your repository

Working with some kind of version control system in today's world of IT should not even be a question. The one question which may remain is: which one 🙂 I personally use GitHub / Bitbucket, and for small-factor needs, Gogs ( the last one deserves a post of its own … but that's for the future 🙂 )

Now once you already use a versioning system, another interesting discussion/challenge can occur, and that is “to branch or not to branch“. Since I know we could write a complete post on this subject alone, I will just stick to my personal opinion at the moment of writing this blog – which is “yep – branch” 🙂

Ok – that was easy. So now we have reached the stage where we have our “code” which we version, and if we do features we even branch – and now how can we visualize it, nicely cherry-pick the changes we are interested in, browse history and do tags ?

By default git offers you the “git log” command. While browsing the internet for some cool approach to this, I came across the following post which showed how to do visualizations ( at least basic ones ).

To make this easy ( in case the post would for some reason not be there anymore ) I have made a gist out of it, which is below.
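For reference, the widely circulated variant of that visualization boils down to a git log one-liner along these lines – not necessarily the exact gist content, but close in spirit:

```shell
git log --graph --abbrev-commit --decorate --all \
  --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)'
```

Most people wrap something like this in a git alias (e.g. `git lg`) so it is always one command away.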

 

And I think I would have just stopped there, if not for the fact that reading a bit more I came across a really great tool called ungit. All it takes to install it is:

npm install -g ungit

 

Once installed, just invoke it from the console:

ungit

 

and off you go … In theory this is nothing more than just a UI for your git repository – but look at this already great view of one of my repos:

ungit_main_screen

 

Now the moment I saw this I thought – “Ok – this will be one of my core tools”. And I was not wrong. I could nicely expand any of my commits and get access to options further down – and of course the details of each of those commits:

ungit_specific_commit

 

For me this makes my everyday work so much easier 🙂 and a bit more cool! If you are using different tools – just leave a comment and share your opinion 🙂

 

 


AVR Library for WS2803 LED Driver IC

So today I will be sharing with you one of my maybe not so recent, but reliable, libraries. It is for the WS2803, which I think is a really cool IC that allows you to control outputs with PWM, using SPI to control the data being sent.

 

The number of components needed is minimal, as we only need to provide one single reference resistor, which limits the amount of current for our outputs.

 

One thing you could ask is: how do you connect to it without a lot of hassle ? Well, for my electronic challenges I personally use the ATB development boards manufactured by Atnel.pl. The reason for this specific set ? Well, first of all it is of extremely high quality, packed with all the items you could even imagine you might need 😀 Take a look …

 

What you see below is the previous version and the newest, shiniest version of the ATB devboard.
2016-06-25 225703

2016-06-25 225650

 

Packed like this, I'm ready to challenge the WS2803 … I won't be providing connection schematics, as the datasheet is available all over the internet.

The library is available here and you can just go ahead and download it. Since I have been developing this on an AVR ATmega32, it uses the hardware SPI defined in the SPI library.

The only pins you need are MOSI and CLK. This is all that's needed to drive the WS2803. Since there is no magic in sending data using the SPI protocol, let's dive into the WS2803 library.

Before you begin, I think you should know that I have written my own wrapper around the delay function. The reason is that in this version I haven't used interrupts and needed to be able to control delays with variables.

#include <util/delay.h>

static inline void delay_ms(uint16_t count)
{
  while(count--)
  {
      _delay_ms(1);   /* avr-libc built-in; called once per remaining millisecond */
  }
}

Once you add the library files to your project, you will first need to define yourself a buffer. I have a tendency to define how many outputs I will have and use that definition in the rest of my program.

So you could do the following to start …

#define WS2803_LED_CNT 18

uint8_t ws_buf[ WS2803_LED_CNT ];
uint8_t * ptrBuf = ws_buf;

 

Now we are ready to start the fun …. the easiest is to try to light up all of the LEDs ( outputs ). For this we will use a function which, operating on the memory buffer, sets all values in it to a single value:

ws2803_set_all( ptrBuf , 255);    
ws2803_shift_out( ptrBuf ); 
delay_ms(3000);

Easy 🙂 Well – the most up to date source is always on GitHub, and there you will be able to find the remaining functions and some examples, i.e.:

  • Fade in
  • Fade out
  • Draw line
  • Light up one by one
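To give an idea of how such effects are built from the same two primitives, here is a rough sketch of a fade-out ( the function names are the ones from the snippets above; the delay value is picked arbitrarily ):

```c
// Fade all outputs from full brightness down to zero.
// Assumes ws_buf / ptrBuf are defined as shown earlier.
void fade_out_all(uint8_t *buf)
{
    for (int16_t level = 255; level >= 0; level--)
    {
        ws2803_set_all(buf, (uint8_t)level); // fill buffer with current level
        ws2803_shift_out(buf);               // clock it out over SPI
        delay_ms(5);                         // controls the fade speed
    }
}
```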



CNC your custom usb slide-in PCB

So I had trouble properly naming the title of this post. As a short explanation: my passion also includes electronics 🙂 and today I felt like playing with USB and RS232 ( the reason for that will become obvious in one of the later posts ).

Now in order to get this working I could most probably just get a USB port and solder some cables onto it, but I think this would totally miss the idea of fun – especially in my case.

So where do we start ? We need to make a design ? No problemos 🙂 Using Eagle CAD I have created the board layout ( as in my case a schematic for the electronics is not needed 🙂 )

The final design after a moment looked like this:

usbslider_01

 

So that was the easy part, right ? Well, yes 🙂 However I do not forget that “sharing is caring”, and that's why you can download the project here.

 

Ok, so how do we get our project from the digital world onto our matrix 🙂 ? There are several methods available:

  • Chemical etching
  • UV
  • CNC
  • Order your PCB at professional services

 

In my experience I have already been through chemical etching, and though it works nicely – yeah….. at a certain point you get tired of all the steps you need to execute. UV I have not played with yet ( but it also involves chemical toys ). Professional services are also not an option, as this project is so small and I need it kinda … now 🙂 so we are left with the cool one …. CNC.

2016-06-21 231039

 

 

Now with the help of a bit of machine software and some PCB material I got the board ready in like 7 minutes. The machine is capable of faster operation, but with PCB I would be risking damage to reallllyyyyy small drill bits ( talking from expensive experience here 🙁 ), so that's perfectly fine with me. After the machine was done I was presented with the following view 🙂

IMG-20160620-WA0003

 

Hey ya! That is what I wanted 🙂 However, for the next version I will need to ditch that outline margin, as it was nice enough to connect all the points of my board, which for USB is not so good !

From there, only a bit of soldering and some heat-shrink tube to isolate remains, and we got this:

 

2016-06-21 221342

2016-06-21 222812

 

It may not look sooooo super PRO 🙂 but this is a POC prototype, to make sure that when I need more and order from professional services I get exactly what I expect.

 

If you are using CNC for PCB – leave a comment, I'm interested in your approach.