
Kubernetes with Calico using kind

Recently, after analysing the requirements for an application I manage, I realised I need a way to secure communication within my cluster – so that, in a nutshell, it is not an open wilderness.

While looking at several alternatives, one was very appealing – especially after watching the following video…

And yes it is project Calico.

So I decided to do some more testing with it and spin it up in a locally running cluster. To have some more fun, this time there are more nodes 🙂

The difference in the below config is that we disable the default CNI.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.240.0.0/16"
  disableDefaultCNI: true
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true,zone=cookie,region=oo-space-1"
  extraPortMappings:
  - containerPort: 30080
    hostPort: 88
    protocol: TCP
  - containerPort: 30443
    hostPort: 444
    protocol: TCP
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=alpha,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=alpha,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=beta,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=beta,region=eu-west-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-centra
l-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-central-1"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "zone=gamma,region=eu-central-1"

Once the cluster is up and running, I used kapp to deploy Calico by issuing the following command:

kapp deploy -a calico -f <(curl https://docs.projectcalico.org/v3.17/manifests/calico.yaml)

Shortly after the nodes applied the configuration change, Calico was running on all nodes.
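
You can watch that happen yourself – until the CNI is running, the nodes report NotReady. A quick check (assuming the k8s-app=calico-node label used by the manifest):

kubectl -n kube-system get pods -l k8s-app=calico-node
kubectl get nodes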

That gets you going right away! But in order to really understand the power you now have, I can highly recommend looking at example NetworkPolicies.
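
To give you a flavour, here is a minimal policy of my own making – it denies all ingress traffic to pods in the default namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress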

Once you have done that, there is also a great tool called sonobuoy to validate not only NetworkPolicies but your Kubernetes cluster configuration in general:

sonobuoy run --e2e-focus "NetworkPolicy" --e2e-skip ""
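
When the run completes, you can check on it and pull the results back – a sketch, as the exact subcommands can differ between sonobuoy versions:

sonobuoy status
sonobuoy results $(sonobuoy retrieve)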

Happy securing your k8s cluster!


“Help me Cloudflare .. you are my only hope!”

With this catchy post title I would like to start a series of short technical blog posts on how Cloudflare solutions can help solve challenges in our IT world (at least the ones I came across).

If you have not heard the name before, go ahead and check cloudflare.com – and to find out what exactly Cloudflare is, have a look at their blog post 🙂 Before I start I would also like to let you know that this website does run on Cloudflare 🙂 but not all scenarios covered will touch this property 😛 (for obvious reasons)

My plan for the coming week (or two) is to show you test case scenarios of the following:


With this cloud Swiss army knife we should be able to build several case scenarios and see how using these products can address our challenges. Now what is really cool about all of them – everything is completely API driven… which means we also get a chance to play around with "no GUI"… the essence we engineers all like so much 🙂


Subscribe so you don't miss out on the posts coming your way with all the goodies!


If there are use case scenarios you would like to see, please leave your ideas in the comments (BTW – comments are moderated 🙂)


GPG secured passwords in git using pass

It might happen that in your working environment you need to store passwords securely. Nowadays many people are using 'cloud' solutions – but as you well know, the cloud is nothing else than 'someone else's computer' 😉 . That being said, it limits the options you have available. As this is a matter of preference, I will not get into a discussion about 'the best solution' but will just show you what I have been using and what I really liked a lot.

The solution is called pass and is available at https://www.passwordstore.org/

So let's go ahead and install it on our machine – the installation steps are nicely outlined on the product page, so here I will just focus on CentOS (where pass comes from the EPEL repository):

sudo yum install pass

As you might have seen from the documentation, you will need your GPG key(s) – for this demo I have created a dummy one:

[root@centos ~]# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub   2048R/5CBDFF98 2016-10-30
uid                  RafPe <[email protected]>
sub   2048R/B3B34661 2016-10-30

[root@centos ~]#
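
If you do not have a key yet, you can create one first – gpg will walk you through it interactively:

gpg --gen-key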


Let's go ahead and initialise pass with the GPG key I have created:

[root@centos ~]# pass init 5CBDFF98
mkdir: created directory ‘/root/.password-store/’
Password store initialized for 5CBDFF98


Once the above is completed, we can start adding passwords to our safe – simply by issuing:

[root@centos ~]# pass insert Business/serviceA/systemA
mkdir: created directory ‘/root/.password-store/Business’
mkdir: created directory ‘/root/.password-store/Business/serviceA’


Listing passwords then becomes really intuitive:

[root@centos ~]# pass ls
Password Store
└── Business
    └── serviceA
        ├── systemA
        └── systemB


To retrieve a password, we just call its path in the tree:

[root@centos ~]# pass Business/serviceA/systemA

We will be asked for our GPG key passphrase in order to retrieve it.
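
A handy extra: pass can also copy the password straight to the clipboard instead of printing it:

pass -c Business/serviceA/systemA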


Here we would now like to make our password safe more reliable by using git to store our secrets. I'm using Gogs (Go Git Service), a lightweight self-hosted option.

By issuing the following commands we get pass to store secrets in git:

Initialize

# Initialize 
[root@centos ~]# pass git init

Add the remote repository (here you would need to adjust the remote to match your setup – I'm using a local Docker instance)

[root@centos ~]# pass git remote add origin http://192.168.178.21:10080/rafpe/passwords.git

Push all changes

[root@centos ~]# pass git push -u --all
Username for 'http://192.168.178.21:10080': rafpe
Password for 'http://rafpe@192.168.178.21:10080':
Counting objects: 7, done.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (7/7), 1.05 KiB | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
To http://192.168.178.21:10080/rafpe/passwords.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
[root@centos ~]#


Once that's done, we can take a peek at our repo, which now holds encrypted passwords for our specified items.


(Screenshot: the passwords repository in Gogs, showing the encrypted entries.)


From now on, whenever I make changes I can just push them nicely to git and I have everything under control! The documentation has a lot more to offer, so be sure to check it out: https://git.zx2c4.com/password-store/about/
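
For example, generating a new 20-character password and pushing the change upstream (the path is just an illustration):

pass generate Business/serviceA/systemC 20
pass git push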


I personally think the product is good – especially in environments where you should not store passwords in 'clouds' due to security constraints that may apply.


Redhat 7 – LDAP authentication using Ansible

Hey! Recently, along with Sanderv32, I have been trying to get LDAP authentication working on RedHat machines. I must admit that we spent quite some time looking for structured and decent information on how to get this working. However, to our surprise, the information out there was completely inaccurate or outdated.

So without big delays, we decided to tackle this challenge using Ansible. Of course, the first attempts were just to get the idea working. As we moved along, our playbook grew until it reached a stage at which we could deploy the LDAP authentication mechanism to all of our RedHat 7 systems.

Below are the tasks of the playbook being used:

    - name: "LDAP Authentication | Install the required packages"
      yum: >
        name="{{item}}"
        state=present
      with_items:
        - "nss-pam-ldapd"
        - "oddjob"
        - "oddjob-mkhomedir"
      tags:
        - "ldap"
        - "packages"
        - "packages_ldap"

    - name: "LDAP Authentication | Ensure services are running"
      service:
          name={{item}}
          enabled=yes
          state=started
      with_items:
        - "nscd"
        - "nslcd"
        - "oddjobd"
      register: services_ldap
      tags:
        - "ldap"
        - "services_ldap"

    - name: "Debug | Display results"
      debug: msg="{{services_ldap.results}}"
      tags:
        - "ldap"

    - name: "LDAP Authentication | Enable LDAP PAM modules"
      command: "authconfig --enableldap --enableldapauth --enablemkhomedir --update"
      tags:
        - "ldap"

    - name: "LDAP Authentication | Adding configuration templates"
      template: >
        src="templates/{{item}}.j2"
        dest="/etc/{{item}}"
      with_items:
        - "nslcd.conf"
      tags:
        - "ldap"
        - "repository"
        - "repository_ldap"
      notify:
        - restart services ldap

And associated handler

---
  - name: "restart services ldap"
    service: >
      name="{{item.name}}" 
      state=restarted
    with_items: services_ldap.results
    tags:
      - "ldap"
      - "services_ldap"


Back in the playbook, note the last task, in which we template the nslcd config file. The file contents are completely overwritten, so make sure you adjust it to your needs.

This template has been used to connect to Active Directory with a dedicated bind user and a modified pagesize (so our results are not trimmed):

# {{ ansible_managed }}
uid nslcd
gid ldap

uri {{ ldap.uri }}
base {{ ldap.basedn }}
binddn {{ ldap.binduser }}
bindpw {{ ldap.binduserpw }}
scope sub
tls_reqcert allow

pagesize 10000
referrals off
idle_timelimit 800
filter passwd (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    passwd uid              sAMAccountName
map    passwd homeDirectory    unixHomeDirectory
map    passwd gecos            displayName
map    passwd loginShell       "/bin/bash"
filter shadow (&(objectClass=user)(!(objectClass=computer))(uidNumber=*)(unixHomeDirectory=*))
map    shadow uid              sAMAccountName
map    shadow shadowLastChange pwdLastSet
filter group  (objectClass=group)


ssl no
tls_cacertdir /etc/openldap/cacerts
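
For completeness, the ldap.* variables consumed by the template could be defined along these lines – the values are hypothetical, and the bind password belongs in Ansible Vault:

ldap:
  uri: "ldap://dc01.example.local"
  basedn: "DC=example,DC=local"
  binduser: "CN=svc-ldap,OU=Service Accounts,DC=example,DC=local"
  binduserpw: "{{ vault_ldap_binduserpw }}"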


That's it, folks! If it does not work for you, please leave a comment, as this is what we use to make sure we have a means of using LDAP auth on our Linux boxes.

 


MySql SSL – require client certificate for user

When working with a MySQL database where you have set up encryption following one of the many guides on the internet, you then have a choice between merely requiring SSL to be used, or requiring that the client also presents a certificate. I followed the complete guide from the MySQL dev site, which allowed me to quickly get the certificates and SSL set up for my database.

Then, depending on your choice, you can create users using snippets like the ones below.
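
A minimal sketch – user, host and password are placeholders to adjust:

-- connection must be encrypted, no client certificate required
CREATE USER 'appuser'@'%' IDENTIFIED BY 'S3cret!' REQUIRE SSL;

-- connection must be encrypted AND the client must present a valid certificate
CREATE USER 'appuser'@'%' IDENTIFIED BY 'S3cret!' REQUIRE X509;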


x509Certificate – System.Security.Cryptography.CryptographicException “Object was not found”

Hey,

So recently I have been working with JSON Web Token authentication and wanted to take an extra step with security. I decided to sign my tokens with certificates.

So without any further delays, I happily placed the certificate within my storage location (for the sake of this post let's say it was the local filesystem) and created a simple method to create my object from the byte array of that certificate and my password.

byte[] binaryData = new byte[1];
// ... Removed for code visibility - binaryData contains raw certificate byte array 

var cert          = new X509Certificate2(binaryData, password);

The problem:

However, when I tried to invoke the X509Certificate2 ctor, passing my raw array of certificate bytes, I received a nasty error saying:

System.Security.Cryptography.CryptographicException
Object was not found.
at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
at System.Security.Cryptography.X509Certificates.X509Utils._LoadCertFromBlob(Byte[] rawData, IntPtr password, UInt32 dwFlags, Boolean persistKeySet, SafeCertContextHandle& pCertCtx)
at System.Security.Cryptography.X509Certificates.X509Certificate.LoadCertificateFromBlob(Byte[] rawData, Object password, X509KeyStorageFlags keyStorageFlags)
at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(Byte[] rawData, String password)
//my code here


Tackling the challenge:

In this instance, solving the problem starts with understanding what is actually going on.

To give you more details: the same problem occurred on my local development environment and on my designated Azure WebApp.

My local website has a dedicated application pool with a specified domain user which the app pool uses as its identity.

It appears that even though I was loading the certificate from a byte[], the underlying Windows Cryptographic Service Provider tried to use the user store – and since my application pool account's profile was not loaded, a cryptographic context was not available.

So initially it seems like setting Load User Profile to true solves the problem. But wait…? Does it really?

What happens when you change that setting? Well, the application pool calls LoadProfile, with all the related implications that follow. This of course includes possible security vulnerabilities, performance costs etc.

Other approach:

(this will also work in an Azure WebApp)

The X509Certificate2 ctor has extra flags (X509KeyStorageFlags) that can be used. If you investigate them you will notice one particularly interesting:

MachineKeySet – the key is written to a folder owned by the machine.

var cert = new X509Certificate2(bytes, password, X509KeyStorageFlags.MachineKeySet);

More info is available in a great post that discusses this in detail.


Good practice:

It's good to clean up after yourself. If you have read the aforementioned post, you will find more info about the temp files left behind when using a byte[] within the X509Certificate ctor.

So I have adapted the method mentioned there and now use:

// write the raw bytes to a unique temp file so the certificate is loaded from disk
var file = Path.Combine(Path.GetTempPath(), "rafpe-" + Guid.NewGuid());
try
{
    File.WriteAllBytes(file, bytes);
    return new X509Certificate2(file, password, X509KeyStorageFlags.MachineKeySet);
}
finally
{
    // remove the temp file regardless of the outcome
    File.Delete(file);
}


Happy coding 😀