
Automating Akamai – Network lists with CLI and API

Hi,

This will most likely be the first of several posts on the tools and approach I use to automate tasks in Akamai. Before we look into a specific toolset, let's peek at Akamai's vision on automation.

 

From what I have seen, some of the features work nicely, while others are still in beta or alpha. We will focus on the Akamai CLI and extend it with a plugin to manage network lists.

Akamai CLI is a tool which allows us to write plugins in most common languages (for me it will be Golang) and then use them from the console. Since the tool is well documented, I will skip introducing it and send you off to the documentation.

 

Choosing your client

Before you go ahead and write your own plugin, you should decide which client to choose (or write your own) to take over communication with Akamai's API.

For Golang, Akamai has a client which you can get here. However, inspired by a colleague of mine who wrote go-gitlab (among others), I decided to make the client a bit more robust and organised, and came up (as we engineers usually do 🙂) with an alternative version.

This client can be found under https://github.com/RafPe/go-edgegrid

 

Akamai-CLI Network Lists

We start off by installing the plugin into Akamai’s CLI toolkit by running

akamai install https://github.com/RafPe/akamai-cli-netlist

which in return shows us output similar to

 

From this point onwards we can make use of all the benefits of our new plugin. Just to give it a spin, I will explore getting the lists…

Getting all network lists

 

Getting one list with all elements

 

Want more?…

The rest is well documented on the repository page under https://github.com/RafPe/akamai-cli-netlist and from there I encourage you to explore the options you have for automation. Let me know in the comments whether it worked for you 🙂

 

 

More community extensions

My extension is not the only one recently created – below is a list of others which you can already make use of:

Akamai CLI for Netstorage https://github.com/partamonov/akamai-cli-netstorage
Akamai CLI for Siteshield https://github.com/partamonov/akamai-cli-siteshield
Akamai CLI for Firewall Rules Notifications https://github.com/partamonov/akamai-cli-frn


PowerShell – Creating PSObject from template

When working with PowerShell I came across a really cool way to work with PSObjects. It is as simple as calling one of the underlying object's methods. But first things first – let's create a template object:

$AccessRules = New-Object PsObject
$AccessRules.PsObject.TypeNames.Insert(0, "FileSystemAccessRules")
$AccessRules | Add-Member -MemberType NoteProperty -Name subFolder -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name identity -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name rights -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name InheritanceFlags -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name accessControlType -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name preserveInheritance -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name isInherited -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name owner -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name PropagationFlags -Value ''

 

That's really easy – now it's time to simply use it as a base for other objects:

$FS_TMP_AR_1 = $AccessRules.psobject.Copy()

$FS_TMP_AR_1.accessControlType = 'Allow'
$FS_TMP_AR_1.identity          = 'BUILTIN\Administrators'
$FS_TMP_AR_1.InheritanceFlags  = "ContainerInherit, ObjectInherit"
$FS_TMP_AR_1.isInherited       = 1
$FS_TMP_AR_1.owner             = "BUILTIN\Administrators"
$FS_TMP_AR_1.preserveInheritance = 1
$FS_TMP_AR_1.rights              = 'FullControl'
$FS_TMP_AR_1.subFolder           = ''
$FS_TMP_AR_1.PropagationFlags ="None"

 

And that's it – voilà 😉 the whole magic is hidden in this single line:

.psobject.Copy()
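For comparison, here is the same template-then-copy pattern sketched in Python. Note that, like `psobject.Copy()`, `copy.copy` performs a shallow copy, so nested reference-type members would still be shared between template and copies:

```python
import copy

# Template object with empty defaults, mirroring the PSObject above.
access_rule_template = {
    "subFolder": "", "identity": "", "rights": "",
    "InheritanceFlags": "", "accessControlType": "",
    "preserveInheritance": "", "isInherited": "", "owner": "",
    "PropagationFlags": "",
}

# Copy the template, then customise only the copy.
rule1 = copy.copy(access_rule_template)
rule1["accessControlType"] = "Allow"
rule1["identity"] = r"BUILTIN\Administrators"
rule1["rights"] = "FullControl"

# The template itself stays untouched.
print(access_rule_template["accessControlType"])  # prints an empty string
```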

 

Hope this helps – happy coding!


HAproxy – backend and domain management using map

This is a quick write-up on how to use a single line for easy backend mapping within HAProxy. It was shown to me by a buddy while we were challenging our current configuration, which had started to grow.

The first thing you need is a map file. Its structure is simple: the first column is what comes in, the second is what comes out. So for our domain mapping we can have a file with a domain name and its respective backend, i.e.

domain.com backend_com
www.domain.com backend_com

Next is a single configuration line on your frontend, associating domains with backends based on the Host header:

use_backend %[req.hdr(host),lower,map_dom(/etc/haproxy/<PATH-TO-MAP-FILE>,<DEFAULT-BACKEND>)]

 

And that is it 🙂 you now have a really dynamic configuration.
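For intuition, `map_dom` matches the full (lowercased) Host value first and then progressively shorter dot-separated domain suffixes. Here is a rough Python sketch of that lookup logic (an illustration only – consult the HAProxy documentation for the exact matching rules):

```python
# Rough sketch of HAProxy's map_dom lookup: try the full host first,
# then progressively shorter dot-separated suffixes, else the default.
def map_dom(host, mapping, default):
    labels = host.lower().split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in mapping:
            return mapping[candidate]
    return default

# The two-entry map file from above, as a dict.
backend_map = {
    "domain.com": "backend_com",
    "www.domain.com": "backend_com",
}

print(map_dom("WWW.DOMAIN.COM", backend_map, "default_backend"))  # exact entry wins
print(map_dom("api.domain.com", backend_map, "default_backend"))  # falls back to domain.com
```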

 


Scaffolding application templates using Yeoman.io

When you quickly need to create a new application, starting from scratch every time can be a pain in the backside 🙂 But have no fear – there is a really nice tool called Yeoman available at http://yeoman.io/.

Based on community best practices, it acts as a "generator" of everything you need to start your new app. Since I'm using a MacBook nowadays, we will go through installation on that platform:

brew install node

 

Once done, we can install Yeoman and, for demo purposes, the generator for Hubot:

npm install -g yo generator-hubot

 

And there it is 😉 from this point onwards we can just create our apps – easily and, what's most important, on demand! Let's start with something simple like Hubot:

mkdir myhubot
cd myhubot
yo hubot

[screenshot: yeoman_hubot]

 

 

 

And off it goes 😉 The possibilities are much broader: starting with a C# app or Angular is as easy as discovering the available generators. Try it 🙂


Redhat 7 – LDAP authentication using Ansible

Hey! Recently, together with Sanderv32, I have been trying to get LDAP authentication working on RedHat machines. I must admit we spent quite some time looking for structured and decent information on how to get this working. However, to our surprise, the information out there was mostly inaccurate or outdated.

So without much delay, we decided to tackle this challenge using Ansible. Of course, the first attempts were just to get the idea working. As we went along, our playbook grew until it reached the stage at which we could deploy the LDAP authentication mechanism to all of our RedHat 7 systems.

Below are the tasks of the playbook being used:

    - name: "LDAP Authentication | Install the required packages"
      yum: >
        name="{{item}}"
        state=present
      with_items:
        - "nss-pam-ldapd"
        - "oddjob"
        - "oddjob-mkhomedir"
      tags:
        - "ldap"
        - "packages"
        - "packages_ldap"

    - name: "LDAP Authentication | Ensure services are running"
      service:
          name={{item}}
          enabled=yes
          state=started
      with_items:
        - "nscd"
        - "nslcd"
        - "oddjobd"
      register: services_ldap
      tags:
        - "ldap"
        - "services_ldap"

    - name: "Debug | Display results"
      debug: msg="{{services_ldap.results}}"
      tags:
        - "ldap"

    - name: "LDAP Authentication | Enable LDAP PAM modules"
      command: "authconfig --enableldap --enableldapauth --enablemkhomedir --update"
      tags:
        - "ldap"

    - name: "LDAP Authentication | Adding configuration templates"
      template: >
        src="templates/{{item}}.j2"
        dest="/etc/{{item}}"
      with_items:
        - "nslcd.conf"
      tags:
        - "ldap"
        - "repository"
        - "repository_ldap"
      notify:
        - restart services ldap

And associated handler

---
  - name: "restart services ldap"
    service: >
      name="{{item.name}}" 
      state=restarted
    with_items: services_ldap.results
    tags:
      - "ldap"
      - "services_ldap"

 

In the above, I have highlighted the part which we use to template the nslcd config file. The file contents are completely overwritten, so make sure you adjust them to your needs.

This template has been used to connect to Active Directory with a dedicated bind user and a modified pagesize (so our results are not trimmed):

# {{ ansible_managed }}
uid nslcd
gid ldap

uri {{ ldap.uri }}
base {{ ldap.basedn }}
binddn {{ ldap.binduser }}
bindpw {{ ldap.binduserpw }}
scope sub
tls_reqcert allow

pagesize 10000
referrals off
idle_timelimit 800
filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    passwd uid              sAMAccountName
map    passwd homeDirectory    unixHomeDirectory
map    passwd gecos            displayName
map    passwd loginShell       "/bin/bash"
filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    shadow uid              sAMAccountName
map    shadow shadowLastChange pwdLastSet
filter group  (objectClass=group)


ssl no
tls_cacertdir /etc/openldap/cacerts

 

 

That's it, folks! If it does not work for you, please leave a comment, as this is what we use to make sure we have a working means of LDAP auth on our Linux boxes.

 


ChatOps using Hubot – Zabbix maintenance

 

 



 

This post is a supplement to the GitHub repo available at https://github.com/RafPe/hubot-zabbix-scripts

 


 

 

So finally the day has come when I can write about my recent involvement in automating 🙂 this time with the use of Hubot (in this role, the favourite Bender) and the good old Rocket.Chat.

 

Simple idea:

If we need to do it once, let's automate it – for sure someone else will need to do it at least once as well.

 

And in most cases it's true 🙂 So one day I woke up quite early – really too early to go to work already, and too late to still get a good sleep. So I got the thing we all think of in the morning… yes, coffee 🙂 And then I thought about the things that people around me have been doing manually for quite a while :/

The challenge that came out of that short moment of thinking was: "setting Zabbix server maintenance with Hubot (Bender)".

 

Getting pieces together:

I really liked that idea. It was around 6 AM, my coffee was halfway through, so I geared up and was ready when I opened my laptop. What was really challenging here is the fact that I had never programmed in CoffeeScript nor in Python, and those are the two main components used to bake this solution. However, at the end of the day it's only a different grammar for getting things done 🙂

I decided not to reinvent the wheel and looked at things that already work. Since I have been automating a lot with Ansible lately, I looked at their GitHub page with extra modules.

And that was exactly what I needed. Then I just went ahead and downloaded Hubot, following the nice and simple documentation. Based on the info there, getting the CoffeeScript to do exactly what I need was just a matter of minutes 🙂 (at least I hoped so).

 

So this is a proxy?

Exactly. The CoffeeScript in Hubot makes sure we respond to properly defined regex patterns which correspond to commands given to our Hubot. From there we execute the Python script.

So I placed the biggest effort on getting the Python script running. I googled around and managed to get it running with arguments, which in return opened the door to properly proxying from CoffeeScript.

 

The final version of the Python script (final as of the write-up of this post) has the following syntax:

python zbx-maint.py

usage: zbx-maint.py [-h] -u USER -p PASSWORD [-t TARGET] [-s SERVER] -a ACTION
                    [-l LENGTH] [-d DESC] [-r REQUESTOR] [-i ID]

 -u USER      : used to connect to zabbix - needs perm to create/delete maintanance
 -p PASSWORD  : password for the user above
 -t TARGET    : host/groups to create maintanance on
 -s SERVER    : URL of the zabbix server
 -a ACTION    : del or set
 -l LENGTH    : Number of minutes to have maintanance for
 -d DESC      : Additonal description added to maintanance
 -r REQUESTOR : Used to pass who has requested action
 -i ID        : Name of maintanance - used for deletion
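Based on that usage output, the argument parsing presumably looks roughly like the following. This is a sketch reconstructed from the help text, not the actual zbx-maint.py source:

```python
import argparse

# Sketch of an argparse setup matching the usage shown above;
# option names and required flags are taken from the usage line.
parser = argparse.ArgumentParser(prog="zbx-maint.py")
parser.add_argument("-u", dest="user", required=True, help="used to connect to zabbix")
parser.add_argument("-p", dest="password", required=True, help="password for the user above")
parser.add_argument("-t", dest="target", help="host/groups to create maintenance on")
parser.add_argument("-s", dest="server", help="URL of the zabbix server")
parser.add_argument("-a", dest="action", required=True, choices=["set", "del"], help="del or set")
parser.add_argument("-l", dest="length", type=int, help="number of minutes of maintenance")
parser.add_argument("-d", dest="desc", help="additional description")
parser.add_argument("-r", dest="requestor", help="who has requested the action")
parser.add_argument("-i", dest="id", help="name of maintenance - used for deletion")

# Example invocation, parsed from an explicit argument list.
args = parser.parse_args(["-u", "bender", "-p", "secret", "-a", "set", "-t", "webfarm", "-l", "30"])
print(args.action, args.target, args.length)  # set webfarm 30
```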

 

What about security ?

All passwords and links used within the Hubot script are passed using environment variables. For proper control and isolation of processes I have been using supervisord here (which is a great tool for this).

 

HUBOT_ZBX_USER      : user accessing zabbix
HUBOT_ZBX_PW        : password for the user
HUBOT_ZBX_URL       : zabbix server URL
HUBOT_ZBX_PYMAINT   : full path to zbx-maint.py script (used by coffee script)

 

Bender in action:

So without any further delay, this is how it looks in action…

 

[screenshot: hubot_zbx_maint_v1]

 

 

 

Being considered:

I'm still looking for other people's feedback to see what can be done better. Most likely I will be publishing some more Zabbix automations to enrich ChatOps and make life more interesting 🙂

 

 


IP address management with PHPipam and docker

Recently I came across the need for an IP address management tool. I looked at several options and decided the best one was phpIPAM, for two main reasons:

  • API
  • Overlapping subnets

The look and feel also gives me a positive feeling about the product, as does the fact that there is a lot of development being done on GitHub.

[screenshot: phpipam_mainconsole]

Using docker to run the application

So I decided to prepare a fully operational Docker solution to support this application. Having learned from previous mistakes, the GitHub repository I will be referring you to has been tagged accordingly, so if any changes occur you will always be able to follow this post's directions.

 

[screenshot: docker-phpipam – phpIPAM, IP address management in a Docker container]

 

I would like to avoid duplicating information, so I will just highlight one of the possible installation options, as the rest is described on Docker Hub and on GitHub.

 

We start off by cloning the repository:

git clone https://github.com/RafPe/docker-phpipam.git

 

Once that's done, we can check out the specific tag (the tag associated with the content of this post):

git checkout v1.0.1

 

and then we have all the components needed to run the last command:

docker-compose up -d

which in return gives the following output

[screenshot: phpipam_compose_running]

 

And off you go with testing. A couple of points are worth mentioning:

  • For a production run, use a database backend with persistent storage – in this form the DB has no persistent storage
  • Consider using SSL

 

The application has a lot of pros and in my opinion is really worth looking into if your management tooling needs some automation!
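Since the API was one of the main reasons for choosing phpIPAM, here is a minimal Python sketch of driving it. The host `ipam.example.com` and app id `myapp` are placeholders (you create an app id under Administration → API in phpIPAM), and you first authenticate against the `user/` controller to obtain the token – verify the details against the API docs for your version:

```python
import json
import urllib.request

# Hypothetical host and API app id - replace with your own values.
BASE = "https://ipam.example.com/api"
APP_ID = "myapp"

def endpoint(path):
    """Build a phpIPAM REST endpoint URL, e.g. endpoint('sections/')."""
    return "{}/{}/{}".format(BASE, APP_ID, path)

def get_sections(token):
    """Fetch all sections, passing the session token in the 'token' header."""
    req = urllib.request.Request(endpoint("sections/"), headers={"token": token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

print(endpoint("subnets/7/addresses/"))  # https://ipam.example.com/api/myapp/subnets/7/addresses/
```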

 


Git – Visualize your repository

Working with some kind of version control system in today's world of IT should not even be a question. The only question that may remain is which one 🙂 I personally use GitHub / Bitbucket, and for a small form factor I use Gogs (the last one deserves a post of its own… but that's for the future 🙂).

Once you already use a versioning system, another interesting discussion/challenge can occur, and that is "to branch or not to branch". Since I know we could write a complete post on this subject alone, I will just stick to my personal opinion at the moment of writing this blog – which is "yep – branch" 🙂

Ok – that was easy. So now we have reached the stage where we have our "code" which we version, and if we do features we even branch. Now, how can we visualize it, nicely cherry-pick the changes we are interested in, browse history and do tags?

By default, Git offers the "git log" command. While browsing the internet for a cool approach, I came across the following post, which showed how to do visualizations (at least basic ones).

To make this easy (in case the post for some reason disappears), I have made a gist out of it, embedded below.

 

And I would have just stopped there, if not for the fact that, reading a bit more, I came across a really great tool called ungit. All it takes to install it is:

npm install -g ungit

 

Once installed, just invoke it from the console:

ungit

 

and off you go… In theory this is nothing more than a UI for your Git repository – but look at this already great view of one of my repos:

[screenshot: ungit_main_screen]

 

The moment I saw this, I already thought: "Ok – this will be one of my core tools". And I was not wrong. I could nicely expand any of my commits and get access to further options – and of course the details of every one of those commits:

[screenshot: ungit_specific_commit]

 

For me this makes everyday work so much easier 🙂 and a bit more cool! If you are using some different tools, just leave a comment and share your opinion 🙂

 

 


Ansible – Using template lookup

For some actions / modules / roles you might want to use your Jinja template as a variable. I spent some time looking for this lookup module, so to save you from crawling the internet, this is how you can use it.

Example of a Jinja template:

  location ~ \.php$ {
    try_files $uri =404;
    root           /var/www/htdocs/;
    fastcgi_pass   unix:/var/run/php65-php-fpm.sock;
    fastcgi_intercept_errors        on;
    fastcgi_buffers 256 16k;
    fastcgi_max_temp_file_size 0;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include        fastcgi_params;

    {% if env is defined and env=="testos" %}
    auth_basic           "Ninja Test ";
    auth_basic_user_file {{ nginx_basic_auth }} ;
    {% endif %}
  }

And then within our playbook we define it as follows:

---
- name: Testing
  hosts: localhost
  vars:
    - blabla: "Testing123"
    - testing: "{{ lookup('template', 'template.j2') }}"

  tasks:
  - debug: msg="{{testing}}"

 

Are you using this in another way? Comment what your approach is 🙂


Ansible – Using dictionary to deploy pem certificates

When automating certificate deployments I wanted a smart way of deploying them, so I went ahead and decided to use dictionaries.

For this example my variables looked like the following:

ssl_certificates:
    domain_uno_com:
      owner: haproxy
      group: haproxy
      mode: "u=r,go="
      certificate: |
                                              -----BEGIN CERTIFICATE-----
                                              dslkajfafak234h23o4h32jkh43jqtghkjafhads;fhd89fuad9f6a8s7f6adsf
                                              < ..................... bogus info uno ...................... >
                                              yjEdslkajfafak234h23o4h32jkh43jZlcmlTaWduLCBJbmMuMR8wHQYDVQQL23
                                                -----END CERTIFICATE-----
      key: |
                                              -----BEGIN PRIVATE KEY-----
                                              dslkajfafak234h23o4h32jkh43jqtghkjafhads;fhd89fuad9f6a8s7f6adsf
                                              < ..................... bogus info uno ...................... >
                                              yjEdslkajfafak234h23o4h32jkh43jZlcmlTaWduLCBJbmMuMR8wHQYDVQQL23
                                              Edslkajfafak234h==
                                              -----END PRIVATE KEY-----
      
    domain_duo_com:
      owner: haproxy
      group: haproxy
      mode: "u=r,go="
      certificate: |
                                              -----BEGIN CERTIFICATE-----
                                              dslkajfafak234h23o4h32jkh43jqtghkjafhads;fhd89fuad9f6a8s7f6adsf
                                              < ..................... bogus info duo ...................... >
                                              yjEdslkajfafak234h23o4h32jkh43jZlcmlTaWduLCBJbmMuMR8wHQYDVQQL23
                                                -----END CERTIFICATE-----
      key: |
                                              -----BEGIN PRIVATE KEY-----
                                              dslkajfafak234h23o4h32jkh43jqtghkjafhads;fhd89fuad9f6a8s7f6adsf
                                              < ..................... bogus info duo ...................... >
                                              yjEdslkajfafak234h23o4h32jkh43jZlcmlTaWduLCBJbmMuMR8wHQYDVQQL23
                                              Edslkajfafak234h==
                                              -----END PRIVATE KEY-----

 

Once we have that within our playbook we will be using the following actions to create ourselves pem files

       - name: SSL certificates Web | Create certificate key files
         copy:
           dest: "{{web_ssl_folder}}/{{ item.key.replace('_','.') }}.pem"
           content: "{{ item.value.certificate + '\n' + item.value.key }}"
           owner: "{{ item.value.owner }}"
           group: "{{ item.value.group }}"
           mode: "{{ item.value.mode }}"
         with_dict: ssl_certificates
         no_log: true

 

Now when we run our playbook, we will get new certificates within the folder defined under web_ssl_folder, called respectively domain.uno.com.pem and domain.duo.com.pem.

Of course, if you add more entries, more files will be created. The only things to change from here are the owner and possibly the rights (although think twice 🙂).
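The with_dict loop above boils down to the following iteration – a plain-Python illustration of how the file names and contents are derived (the certificate strings and folder are stand-ins for the real values):

```python
# Illustration of what the with_dict task does: one .pem file per
# dictionary key, named after the key with '_' replaced by '.'.
ssl_certificates = {
    "domain_uno_com": {"certificate": "CERT-UNO", "key": "KEY-UNO"},
    "domain_duo_com": {"certificate": "CERT-DUO", "key": "KEY-DUO"},
}
web_ssl_folder = "/etc/haproxy/ssl"  # stand-in for the playbook variable

pem_files = {}
for name, props in ssl_certificates.items():
    dest = "{}/{}.pem".format(web_ssl_folder, name.replace("_", "."))
    # Certificate first, then the key, joined with a newline - as in the task.
    pem_files[dest] = props["certificate"] + "\n" + props["key"]

print(sorted(pem_files))  # ['/etc/haproxy/ssl/domain.duo.com.pem', '/etc/haproxy/ssl/domain.uno.com.pem']
```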