IP address management with phpIPAM and Docker

Recently I came across the need for an IP address management tool. I looked at several options and decided that the best one was phpIPAM, for two main reasons:

  • API
  • Overlapping subnets

The look and feel also gives me a positive impression of the product, as does the fact that there is a lot of active development happening on GitHub.


Using Docker to run the application

So I decided to prepare a fully operational Docker setup to support this application. I have learned from previous mistakes: the GitHub repository I will be referring you to has been tagged accordingly, so if any changes occur you will always be able to follow the directions in this post.




I would like to avoid duplicating information, so I will just highlight one of the possible installation options; the rest is covered on Docker Hub and on GitHub.


We start off by cloning the repository

git clone https://github.com/RafPe/docker-phpipam.git


Once that's done, we can check out the specific tag (the tag associated with the content of this post). A plain checkout is enough here – a tag is not a remote branch, so no -t switch is needed:

git checkout v1.0.1


and then we have all the components needed to run the last command

docker-compose up -d

which in turn pulls the images and starts the containers.



And off you go with testing. A couple of points are worth mentioning here:

  • For production use, run a database backend with persistent storage – in this form the DB has no persistent storage
  • Consider using SSL
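For the persistence point above, one option is a compose override file that mounts a host directory into the DB container. This is a sketch only – the service name phpipam-mysql and the host path are my assumptions, so check the actual service names in the repository's docker-compose.yml first.

```shell
# Write an override file next to docker-compose.yml; compose merges it in
# automatically on the next "docker-compose up -d".
# NOTE: "phpipam-mysql" and the host path below are assumptions – adjust them
# to match the service names in the repository's docker-compose.yml.
cat > docker-compose.override.yml <<'EOF'
phpipam-mysql:
  volumes:
    - /srv/phpipam/db:/var/lib/mysql
EOF
```

With that in place, the MySQL data survives container re-creation.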


The application has a lot of pros and, in my opinion, is really worth looking into if your management tooling needs some automation!



ZFS – creating a pool from disks on CentOS 7

Is it storage system time?

Today's post will be a short one about creating a ZFS pool on CentOS 7. This is a logical follow-up to the previous post, where I covered the build-out of the new server. For the OS I decided on software RAID-1 using LVM.

For data I have 3x 4TB disks, and after looking around I decided to use ZFS. Why ZFS? It's reliable (I have worked with systems based on it before) and it's really fast if you do a deep dive and configure it to your needs. As I would like to avoid duplicating posts, you can find install guidelines on the ZFS wiki.


For some people (like me 🙂) it's handy to keep an eye on the documentation, so you know what you are dealing with. It can be a good entry point before we continue, and I will most probably refer you to RT*M 🙂 a couple of times along the way. Documentation for administering ZFS is here


Which drives do we use?

So let’s start by checking our available disks

[root@server ~]# fdisk -l /dev/sd?


Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sde: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdf: 240.1 GB, 240057409536 bytes, 468862128 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xdb1d2969

   Device Boot      Start         End      Blocks   Id  System
[root@server ~]#


Here it might be worth looking into assigning human-readable aliases to your drives. In a single-host scenario this might not be so useful, but when you work with enterprise systems in production, where for obvious reasons 🙂 you have more than one server, it becomes really handy.
But before actually doing this in the operating system, I did the prep work on the server itself



So off we go to create /etc/zfs/vdev_id.conf

# Custom by-path mapping for large JBOD configurations
#<ID> <by-path name>
alias BAY1_DISK1 pci-0000:00:17.0-ata-1.0
alias BAY1_DISK2 pci-0000:00:17.0-ata-2.0
alias BAY0_DISK2 pci-0000:00:17.0-ata-3.0
alias BAY0_DISK1 pci-0000:00:17.0-ata-4.0
alias BAY0_DISK0 pci-0000:00:17.0-ata-5.0
# alias  xxx      pci-0000:00:17.0-ata-6.0


Once this is done we need to trigger a udev update using the udevadm command

udevadm trigger

After doing the above we will be able to list the disks using our aliases – the new names show up under /dev/disk/by-vdev/.


Now all that's left to do is to create the ZFS pool. However, just to be on the safe side, we can execute a dry run first.

zpool create -f -n data raidz BAY0_DISK0 BAY0_DISK1 BAY0_DISK2

In the command above the following happens:

  • we request a pool to be created by using zpool create
  • we indicate we would like a dry run by using the -n switch
  • data is our pool name
  • raidz is the ZFS RAID type I have chosen, since I have 3 disks (it would be cool to have 4 and use raidz2)

The result shows what would be done with our drives



For me this looks promising – let's go ahead and get our pool created for real.

zpool create -f -o ashift=12 -O atime=off -m /pools/data data raidz BAY0_DISK0 BAY0_DISK1 BAY0_DISK2

where:

  • -f : forces creation, as ZFS suspects we have partitions on those drives – but trust me – we don't
  • ashift=12 : following the recommendation for drives with 4K block sizes (Advanced Format drives – which I recommend getting familiar with)
  • atime=off : disables access time updates, which gives us a performance boost. Whether you can live without access times is something you need to decide for yourself
  • -m : is the mount point for the pool. The directory needs to exist already
  • raidz : is of course the RAID type we will be using
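One of the switches above is worth unpacking: ashift is the base-2 logarithm of the sector size ZFS will assume, so ashift=12 corresponds to 4096-byte (4 KiB) sectors – matching the physical sector size fdisk reported for the data drives earlier:

```shell
# ashift=12 means 2^12-byte sectors, i.e. 4 KiB – the physical sector size
# fdisk reported for the 4TB drives above
echo $((1 << 12))
```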


The reason I'm mentioning 4K Advanced Format drives here is performance. Below is a snippet from a forum thread that explains what we are looking at:


Furthermore, some ZFS pool configurations are much better suited towards 4K advanced format drives.

The following ZFS pool configurations are optimal for modern 4K sector harddrives:
RAID-Z: 3, 5, 9, 17, 33 drives
RAID-Z2: 4, 6, 10, 18, 34 drives
RAID-Z3: 5, 7, 11, 19, 35 drives

The trick is simple: subtract the number of parity drives and you get:
2, 4, 8, 16, 32 …

This has to do with the recordsize of 128KiB that gets divided over the number of disks. Example for a 3-disk RAID-Z writing 128KiB to the pool:
disk1: 64KiB data (part1)
disk2: 64KiB data (part2)
disk3: 64KiB parity

Each disk now gets 64KiB, which is an exact multiple of 4KiB. This means it is efficient and fast. Now compare this with a non-optimal configuration of 4 disks in RAID-Z:
disk1: 42.66KiB data (part1)
disk2: 42.66KiB data (part2)
disk3: 42.66KiB data (part3)
disk4: 42.66KiB parity

Now this is ugly! It will either be padded down to 42.5KiB or padded up toward 43.00KiB, which can vary per disk. Both of these are non-optimal for 4KiB-sector harddrives, because neither 42.5K nor 43K is a whole multiple of 4K. It needs to be a multiple of 4K to be optimal.
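The arithmetic in that quote is easy to check yourself: divide the 128 KiB recordsize by the number of data (non-parity) disks and see whether the result is a whole multiple of 4 KiB:

```shell
# 128 KiB recordsize divided over the data disks (RAID-Z = 1 parity disk)
for disks in 3 4; do
  data=$((disks - 1))
  per_disk=$((128 * 1024 / data))
  if [ $((per_disk % 4096)) -eq 0 ]; then aligned=yes; else aligned=no; fi
  echo "RAID-Z with $disks disks: $per_disk bytes per data disk (4K-aligned: $aligned)"
done
```

With 3 disks each data disk gets a clean 64 KiB; with 4 disks the per-disk share is not a multiple of 4 KiB, which is exactly the padding problem described above.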


So after running the command above we have our pool running



And that's more or less it for now 🙂 we have our pool running and mounted as it should be.


Extra resources? Something for the future?

In later posts we will look into performance considerations for different configurations, which will enable us to make configuration decisions based on facts.

I have also come across a really useful post about ZFS, which you can find below:

Install ZFS on Debian GNU/Linux


Building my own datacenter – this time at home

A datacenter at home? Why?

A natural question would be why on earth you would want to build a “datacenter” (quoted on purpose) in the peace of your own home. For 99.9% of people the answer would be “I would never have one”. Well, I'm in that 0.1%, and the reason is simple … actually there is more than one: a passion for IT and the drive to learn and do cool stuff.

For this reason several things at my home have changed:

  • Internet connection upgraded to 0.5 Gbps fibre
  • Public IP address space of /29
  • Internal home network secured with a customised APU board acting as edge router
  • Managed switch to introduce VLANs
  • Strong wireless with Ubiquiti
  • And I think the most interesting ….. the server


Networking with “learn as you go”

Since, apart from application/server/development work, I also dabble in electronics and the subject interests me, I decided that the router given to me by my provider was far from being “cool” or fully under my control. So I started off with a good old desktop machine, but that pushed me back to the so-called “router on a stick” setup.

Since I wanted a better experience I decided to move on, and by pure luck I found this board. Since then I have acquired 3 of them. How come? Well, they offer much more than just networking. I can use them to learn Linux kernel patching, I can use the board to connect the world of software to the world of electronic devices I make, and what else …. ohhh yes … it's also my edge router with 3x gigabit Ethernet ports, which lets me learn all the tricks of networking/routing/VLANs/troubleshooting 🙂

As if that were not enough, I harvested my old Dell laptop and plugged in an Ericsson mobile modem (3G), which now gives me an alternative internet connection in case of failure 🙂 wow + wow + wow 🙂

So here is how one looks without its enclosure



And there it is 🙂 If you have any questions about this little devil – just let me know 🙂 I will be happy to provide more answers.


That was fun – but where is the server?

So the whole point here is having a server, which I strongly believe is not about “how much did it cost?” but all about “what can you learn on it?”. If you already see this difference, then you are most probably one step ahead of others.

Now, I will not pretend that I know hardware really well … I don't 🙂 and that's why a good friend of mine with super uber eXperience helped me put together a kit list, which turned out to be a great server for learning purposes. Below you can see a table of what we concluded to be the best:

Type        | Product                  | Comments                                       | Link
Motherboard | Asus Q170M2              | Chosen for its 2 built-in Ethernet ports       | https://www.asus.com/Motherboards/Q170M2/
PSU         | Corsair VS550 – 550W     | To have enough spare power                     |
Enclosure   | Antec ISK 600m           | Just because it looks cool 🙂                  |
HDD (data)  | HGST 4TB 3.5″ 7200RPM    | 3x of them – RAID5 – used for data             |
HDD (OS)    | HGST 500GB 2.5″ 7200RPM  | 2x of them – RAID1 – used for OS               |
Processor   | Intel i7-6700            | To get the most out of it                      |
Memory      | Hyper Fusion 16GB        | 4x of them – to max out the board. Max the fun |

Now this is what we call a server for fun. At this stage you might ask what will be running on that box …. well, KVM for virtualisation and Open vSwitch to play around with SDN.

So was it scary to put it all together?

Ohhh hell it was 🙂 I felt like I was handling really fragile Lego pieces, and the fact that I was so excited didn't really help 🙂 I'm attaching a couple of the better photos from the build-out up to the first boot 🙂 enjoy!





That's it for now, folks 🙂

Hope you enjoyed this short adventure. We will be using the toys mentioned in this post quite soon and will definitely have more fun! So stay tuned!


DevOpsdays 2016 Amsterdam – Videos are here

If, for whatever reason, you missed DevOpsdays Amsterdam this year – you can watch all the published videos on the Vimeo channel! Just head over HERE.

Some of my favorites :

DevOpsdays Amsterdam 2016 Day 1 – Adam Jacob

DevOpsdays Amsterdam 2016 Day 1 – Avishai Ish-Shalom

DevOpsdays Amsterdam 2016 Day 1 – Daniël van Gils


Hope you enjoy them as well!



Git – Visualize your repository

Working with some kind of version control system in today's world of IT should not even be a question. The only question that may remain is which one 🙂 I personally use GitHub / Bitbucket, and for small-scale use, Gogs (the last one deserves a post of its own … but that's for the future 🙂).

Once you are already using a versioning system, another interesting discussion/challenge can occur, and that is “to branch or not to branch”. Since I know we could write a complete post on this subject alone, I will just stick to my personal opinion at the moment of writing – which is “yep – branch” 🙂

OK – that was easy. So now we have reached the stage where we have our “code”, which we version, and if we do features we even branch – so how can we visualize it, nicely cherry-pick the changes we are interested in, browse history and create tags?

By default Git offers the “git log” command. While browsing the internet for a cool approach to this, I came across the following post, which showed how to do visualizations (at least a basic one).

To make this easy (in case the post disappears for some reason), I have made a gist out of it, and it's below.
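If the gist does not render here either, the core of that basic visualization is typically a git log invocation along these lines (my reconstruction – the original gist may differ in its exact format string):

```shell
# Throwaway repository so the graph has something to show
# (the path and commit messages are purely illustrative)
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "first commit"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "second commit"

# The basic visualization: an ASCII commit graph, one line per commit,
# with branch and tag decorations
git log --graph --oneline --decorate --all
```

Many people wrap that last command in an alias (e.g. git lg) so it's always one short command away.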


And I think I would have stopped there, if not for the fact that, reading a bit more, I came across a really great tool called ungit. All it takes to install it is

npm install -g ungit


Once installed, just invoke it from the console



and off you go … In theory this is nothing more than a UI for your Git repository – but look at this already great view of one of my repos



The moment I saw this I thought – “OK – this will be one of my core tools”. I was not wrong. I can nicely expand any of my commits and get access to further options – and of course the details of every one of those commits



For me this makes everyday work so much easier 🙂 and a bit more cool! If you are using different tools – just leave a comment and share your opinion 🙂