
OpenSSL p7b certificate: unable to load certificate

Recently when working with certificates I received them in p7b format. If you just try to use them as-is you might get:

unable to load certificate
140735207381436:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1319:
140735207381436:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:381:Type=X509_CINF
140735207381436:error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:tasn_dec.c:751:Field=cert_info, Type=X509
140735207381436:error:0906700D:PEM routines:PEM_ASN1_read_bio:ASN1 lib:pem_oth.c:83:

When trying to validate a certificate using openssl, this happens because it is in the wrong format: whilst the certificate file visually appears to be in X.509 format, you will find it contains a far longer base64 string than X.509 certificates of the same bit length.
The format in this case is p7b (PKCS #7); to use the certificate with Apache you're going to have to convert it:

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer

Within the resulting .cer file you will find your X.509 certificate bundled with the relevant CA certificates. Break these out into your .crt and ca.crt files and load them into Apache as normal.
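If you want to script that break-out step, a minimal sketch along these lines should do it (splitting on the BEGIN CERTIFICATE markers; the cert-N.crt naming is an arbitrary choice):

# Split the bundle into cert-1.crt, cert-2.crt, ... one certificate each
awk '/-----BEGIN CERTIFICATE-----/{n++} n{print > ("cert-" n ".crt")}' certificate.cer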


Golang app to authenticate with AWS Cognito Pool

Since I started working with AWS I sometimes hit the same problem more than once 😉 One of those moments happened when working with AWS Cognito – I just needed to authenticate and get a token – or just verify the user 😉 – from the command line. I honestly did not want to be bothered with any complexity just to get simple tokens which I planned to use for accessing other systems etc.

For this purpose I have created a simple CLI (right now with just 2 methods) to help me out in those situations. Usage is extremely simple – you just need to have your AWS profile configured and have the details of the AppClient from your user pool.

Authenticate

RafPe $ go-cognito-authy --profile cloudy --region eu-central-1 auth --username rafpe --password 'Password.0ne!' --clientID 2jxxxiuui123
{
AuthenticationResult: {
    AccessToken: "eyJraWQiOiJ0QXVBNmxtNngrYkxoSmZ",
    ExpiresIn: 3600,
    IdToken: "eyJraWQiOiJ0bHF2UElTV0pn",
    RefreshToken: "eyJjdHkiOiJKV1QiLCJlbmMiOiJBMjU2R-TpkR_uompG7fyajYeFvn-rJVC_tDO4pB3",
    TokenType: "Bearer"
},
ChallengeParameters: {}
}

which in return should give you a response with the tokens needed further in your adventures with AWS…. but if you have a user in a state where the password needs to be changed 😉 ….

RafPe $ go-cognito-authy --profile cloudy --region eu-central-1 auth --username rafpe --password 'Password.0ne!' --clientID 2jxxxiuui123
{
ChallengeName: "NEW_PASSWORD_REQUIRED",
ChallengeParameters: {
    requiredAttributes: "[]",
    userAttributes: "{\"email_verified\":\"true\",\"email\":\"[email protected]\"}",
    USER_ID_FOR_SRP: "rafpe"
},
Session: "bCqSkLeoJR_ys...."
}

 

Administratively set new pass

With the session above and the known challenge for a new password, you can use it to set the desired password:

RafPe $ go-cognito-authy --profile cloudy --region eu-central-1 admin reset-pass --username rafpe --pass-new 'Password.0ne2!' --clientID 2jxxxiuui123 --userPoolID eu-central-1_CWNnTiR0j --session "bCqSkLeoJR_ys...."

and voilà 😉 we can now continue playing with tokens.
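If you just want a quick sanity check without the Go CLI, the same first step can be exercised with the plain AWS CLI – a hedged sketch, assuming your app client has the USER_PASSWORD_AUTH flow enabled (values below are the example ones from above):

# Equivalent authentication call using aws cognito-idp directly
aws cognito-idp initiate-auth \
    --auth-flow USER_PASSWORD_AUTH \
    --client-id 2jxxxiuui123 \
    --auth-parameters USERNAME=rafpe,PASSWORD='Password.0ne!' \
    --profile cloudy --region eu-central-1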

 

 

Patches welcome

The whole solution is available on GitHub https://github.com/RafPe/go-cognito-authy/tree/master and if you are missing something, please create a PR 😉


PKI infrastructure using Hashicorp Vault

So today we will quickly go through setting up Vault as our PKI backend. Vault's capabilities stretch far beyond what is shown here – we are only touching a few of the many options Hashicorp Vault offers.

The idea here is to create a root CA and then an intermediate CA to provide our users/servers with certificates based on our needs. Since I have already been playing a bit with Vault, I prepared myself a quick script. But before we go there, a few prerequisites need to be in place for all of this to work.

Quickly building a Vault server when you have a Docker engine available is as easy as running

docker run -d --name vault -P --cap-add IPC_LOCK rafpe/docker-vault:latest server -dev-listen-address=0.0.0.0:8200 -dev

which will bring up our container. From there we need to grab the root token ID, which we will use later for calls to our server.
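In -dev mode the root token is printed on startup, so one quick way to grab it is from the container logs:

# The dev server prints 'Root Token: <token>' on startup
docker logs vault 2>&1 | grep 'Root Token'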

 

Export the values

export VAULT_ADDR="http://my-server-address:my-port"
export VAULT_TOKEN="my-token"

 

Once done you can grab my init script below.

Be sure to modify the URL for your Vault server and off you go 🙂
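In case the embedded script is not visible, here is a hedged sketch of the steps it performs (the mount paths rafpe_root and rafpe_intermediate are assumptions matching the role commands below, and the common names are examples):

# Mount the root CA backend and generate a self-signed root certificate
vault mount -path=rafpe_root -max-lease-ttl=87600h pki
vault write rafpe_root/root/generate/internal common_name="rafpe.engineer Root CA" ttl=87600h

# Mount the intermediate CA backend and generate a CSR for it
vault mount -path=rafpe_intermediate -max-lease-ttl=26280h pki
vault write -field=csr rafpe_intermediate/intermediate/generate/internal \
    common_name="rafpe.engineer Intermediate CA" > intermediate.csr

# Sign the intermediate CSR with the root CA and import the result
vault write -field=certificate rafpe_root/root/sign-intermediate \
    csr=@intermediate.csr ttl=26280h > intermediate.pem
vault write rafpe_intermediate/intermediate/set-signed certificate=@intermediate.pem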

 

To create a certificate you need to create a role and then make a request to issue one:

vault write rafpe_intermediate/roles/rafpe-engineer lease_max="336h" lease="336h" key_type="rsa" key_bits="2048" allow_any_name=true

vault write rafpe_intermediate/issue/rafpe-engineer common_name="ninja.rafpe.engineer:rafpe" ttl=720h format=pem
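If you want to capture the issued material straight into files, the JSON output format together with jq is handy – a small sketch (certificate and private_key are the standard fields in the PKI backend's response):

vault write -format=json rafpe_intermediate/issue/rafpe-engineer \
    common_name="ninja.rafpe.engineer:rafpe" ttl=720h format=pem > issued.json
jq -r '.data.certificate' issued.json > ninja.crt
jq -r '.data.private_key' issued.json > ninja.key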

 

This will get you started. In one of the next posts we will use this infrastructure for our HAProxy.


Setup for AVR development on MacOS

Hey,

So today we look into something I was really looking for recently. In a nutshell, it is the setup of the components required to program AVRs on macOS.

Here I assume you already have Homebrew installed, as it will be our main means of software installation.

 

Add new tap

brew tap osx-cross/avr

Install avr-gcc

brew install avr-libc

This one will install avr-binutils and avr-gcc. The avr-gcc installation takes time as it compiles from source… so make yourself your favourite drink here 😉

 

Install avrdude

brew install avrdude --with-usb
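To verify the toolchain end to end, a minimal compile-and-flash sketch looks like this (the ATmega328P MCU and usbasp programmer are assumptions – adjust to your chip and programmer):

# Compile for the target MCU and convert to Intel HEX
avr-gcc -mmcu=atmega328p -DF_CPU=16000000UL -Os -o main.elf main.c
avr-objcopy -O ihex -R .eeprom main.elf main.hex

# Flash over the usbasp programmer
avrdude -c usbasp -p m328p -U flash:w:main.hex:i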

 

Avrdude error when loading MCU in Eclipse

If you get an error when using Eclipse with avrdude, it is because the AVR Eclipse plugin has not been updated for a while. Although this does not affect programming, it can easily be solved. My friend sanderv32 has created a really nice avrdude wrapper which solves this problem once and for all!

You can find his repo here => https://github.com/sanderv32/avrdude-av

 

Happy coding!


GPG secured passwords in git using pass

It might happen that in your working environment you need to store passwords securely. Nowadays many people are using ‘cloud’ solutions – but as you well know, the cloud is nothing else than ‘someone else’s computer’ 😉 . That said, it limits the options you have available. As this is a matter of preference I will try not to get into a discussion of ‘the best solution’ but will just show you what I have been using and what I really liked a lot.

The solution is called pass and is available at https://www.passwordstore.org/

So let’s go ahead and install it on our machine – the installation steps are nicely outlined on the product page, so here I will just focus on CentOS:

sudo yum install pass

As you might have seen from the documentation you will need your GPG key(s) – for this demo I have created a dummy one:

[root@centos ~]# gpg --list-keys
/root/.gnupg/pubring.gpg
------------------------
pub   2048R/5CBDFF98 2016-10-30
uid                  RafPe <[email protected]>
sub   2048R/B3B34661 2016-10-30

[root@centos ~]#

 

Let’s go ahead and initialise our pass with the GPG key I have created.

[root@centos ~]# pass init 5CBDFF98
mkdir: created directory ‘/root/.password-store/’
Password store initialized for 5CBDFF98

 

Once the above is completed we can start adding passwords to our safe – simply by issuing:

[root@centos ~]# pass insert Business/serviceA/systemA
mkdir: created directory ‘/root/.password-store/Business’
mkdir: created directory ‘/root/.password-store/Business/serviceA’

 

Listing passwords then becomes really intuitive:

[root@centos ~]# pass ls
Password Store
└── Business
    └── serviceA
        ├── systemA
        └── systemB

 

To retrieve a password we just call its tree path:

[root@centos ~]# pass Business/serviceA/systemA

We will then be asked for our GPG key passphrase in order to retrieve it.
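Two other everyday commands worth knowing: pass can generate a random password for a new entry, and copy a stored one straight to the clipboard instead of printing it:

# Generate a random 20-character password for a new entry
pass generate Business/serviceA/systemB 20

# Copy the password to the clipboard instead of echoing it
pass -c Business/serviceA/systemA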

 

 

Now we would like to make our password safe more reliable by using Git to store our secrets. I’m using Gogs (Go Git Service), which is a lightweight self-hosted option.

By issuing the following commands we get pass to store secrets in git:

Initialize

# Initialize
[root@centos ~]# pass git init

Add the remote repository (here you would need to adjust the remote repository to match yours – I’m using a local docker instance):

[root@centos ~]# pass git remote add origin http://192.168.178.21:10080/rafpe/passwords.git

Push all changes:

[root@centos ~]# pass git push -u --all
Username for 'http://192.168.178.21:10080': rafpe
Password for 'http://rafpe@192.168.178.21:10080':
Counting objects: 7, done.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (7/7), 1.05 KiB | 0 bytes/s, done.
Total 7 (delta 0), reused 0 (delta 0)
To http://192.168.178.21:10080/rafpe/passwords.git
 * [new branch]      master -> master
Branch master set up to track remote branch master from origin.
[root@centos ~]#

 

Once that’s done we can take a peek at our repo, which now has encrypted passwords for our specified items.

 

[Screenshot: the passwords repository in Gogs]

 

From now on, whenever I make changes I can just push them nicely to git and I have everything under control! The documentation has a lot to offer so be sure to check it out in more detail: https://git.zx2c4.com/password-store/about/

 

I personally think the product is good – especially in environments where you should not store passwords in ‘clouds’ due to security constraints which may apply.


IP address management with PHPipam and docker

Recently I came across the need for an IP address management tool. I looked at several options and decided that the best one was phpIPAM, for 2 main reasons:

  • API
  • Overlapping subnets

Also, the look and feel gives me a positive impression of the product, as does the fact that there is a lot of development activity on GitHub.

[Screenshot: phpIPAM main console]

Using docker to run the application

So I decided to prepare a fully operational Docker solution to support this application. Having learned from previous mistakes, the GitHub repository I will be referring you to has been tagged accordingly, so if any changes occur you will always be able to follow this post’s directions.

 

[Screenshot: the RafPe/docker-phpipam repository on GitHub]

 

I would like to avoid duplicating information, therefore I will just highlight one of the possible installation options, as the rest is covered on Docker Hub and on GitHub.

 

We start off by cloning the repository:

git clone https://github.com/RafPe/docker-phpipam.git

 

Once that’s done we can check out the specific tag (the tag associated with the content of this post):

git checkout v1.0.1

 

and then we have all components needed to run the last command

docker-compose up -d

which in return gives the following output:

[Screenshot: docker-compose bringing up the phpipam containers]

 

And off you go ahead with testing. Here a couple of points are worth mentioning:

  • For production use a database backend with persistent storage – in this form the DB has no persistent storage (see the sketch after this list)
  • Consider using SSL
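As a hedged sketch of the first point, the database could be run with a named volume so its data survives container restarts (the container/volume names here are arbitrary, not the ones from the repo):

# MySQL backend with persistent storage in a named volume
docker run -d --name phpipam-mysql \
    -e MYSQL_ROOT_PASSWORD='my-secret-pw' \
    -v phpipam_db_data:/var/lib/mysql \
    mysql:5.6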

 

The application has a lot of pros and in my opinion is really worth looking into if your management tooling needs some automation!

 


Git – Visualize your repository

Working with some kind of version control system in today’s world of IT should not even be a question. One question may remain – which one 🙂 I personally use GitHub / Bitbucket and, for small form factor, Gogs (the last one deserves a post of its own… but that’s for the future 🙂).

Now, once you already use a versioning system, another interesting discussion/challenge can occur, and that is “to branch or not to branch“. Since I know we could write a complete post on this subject, I will just stick to my personal opinion at the moment of writing this blog – which is “yep – branch 🙂“

Ok – that was easy. So now we have reached the stage where we have our “code” which we version, and if we do features we even branch – and now, how can we visualize it, nicely cherry-pick the changes we are interested in, browse history and do tags?

By default git offers you the “git log” command. While browsing the internet for some cool approach I came across the following post, which showed how to do (at least basic) visualizations.

To make this easy (in case the post would for some reason not be there) I have made a gist out of it, and a commonly shared variant is below.
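The gist embed is not reproduced here, but a widely shared variant of that visualization is a git alias along these lines (the alias name lg is an arbitrary choice):

# Add a pretty, graph-style log alias and use it
git config --global alias.lg "log --graph --abbrev-commit --decorate --all --format=format:'%C(bold blue)%h%C(reset) - %C(bold green)(%ar)%C(reset) %C(white)%s%C(reset) %C(dim white)- %an%C(reset)%C(bold yellow)%d%C(reset)'"
git lg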

 

And I think I would have just stopped there, if not for the fact that reading a bit more I came across a really great tool called ungit. All it takes to install it is

npm install -g ungit

 

Once installed from console just invoke it

ungit

 

and off you go… In theory this is nothing more than just a UI for your git repository – but look at this already great view of one of my repos:

[Screenshot: ungit main screen]

 

The moment I saw this I already thought: “Ok – this will be one of my core tools”. And I was not wrong. I could nicely expand any of my commits and get access to options further down – and of course the details of every one of those commits.

[Screenshot: ungit view of a specific commit]

 

For me this makes my everyday work so much easier 🙂 and a bit more cool! If you are using some different tools – just leave a comment and share your opinion 🙂

 

 


Atom cheatsheet

If you are like me and appreciate tools which enable syntax highlighting for multiple standards and help you be quick and efficient, then I recommend using atom.io.

And since we want to be as fast as possible, below you can find a cheatsheet that I came across.

 

 

[Screenshot: Atom keyboard shortcuts cheatsheet]

 

 


Logstash – Filtering Vyos syslog data

Hey, so in the last days/weeks 🙂 I have been working quite a lot with the ELK stack – especially on getting data from my systems into Elastic. There would not be any problem if not for the fact that the default parsing did not quite do the job. But what would IT life be without challenges?

So in this post I will explain in short how I have overcome this problem. And I’m sure you will be able to use this, or even make it better.

We will look into the following:

* Incoming raw data

* Creating filter

* Enjoying results

 

Incoming raw data:

So you have your Vyos box doing the hard work on the edge of your network. And now you would like to have visibility when someone is knocking at your door, or to find the root cause when troubleshooting firewall rules.

Example incoming data from my box looks similar to the following:

<4>Dec  6 01:36:00 myfwname kernel: [465183.670329] 
[internet_local-default-D]IN=eth2 OUT= 
MAC=00:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:00 
SRC=5.6.7.8 DST=1.2.3.4 LEN=64 TOS=0x00 
PREC=0x00 TTL=56 ID=10434 DF PROTO=TCP 
SPT=51790 DPT=80 WINDOW=65535 RES=0x00 SYN URGP=0

and if we just apply basic syslog filtering it will not give us all the required fields. The challenge here is that we need to extract the firewall rule (e.g. internet_local-default-D above). Following that, we can see the use case for a key-value filter.

 

Creating filter:

So the time has come to use some magical skills to create the configuration for the Logstash filter. I would like to stress that using different approaches can have an impact on performance, both negative and positive. As I’m still getting familiar with Logstash this might not be the best solution, but I will definitely keep exploring.

You will notice that my filters use conditional statements so I do not process data unnecessarily. In my case Vyos traffic is tagged as syslog and contains a specific string in the message.

So without further babbling… we begin by parsing out the data from the message that will get extracted for sure.

  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }

Points of interest here:

  • The grok filter DOES NOT break on match
  • We match on message, and further on the extracted syslog_message, to get our firewall rule from between [ ]

Next we make changes on the fly to our fields using mutate:

    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "= ", "=xxx" ]            # Here we remove scenario where this value is empty
    }

Points of interest:

  • I add a field called event_type with the value firewall, so in future I will be able to quickly query for those events
  • I rename my previous field (firewall_rule) to a nested field
  • And lastly I use gsub to mitigate the problem of missing values in key-value pairs

Once this is done I extract the remaining values using the kv filter, which is configured as follows:

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }

Points of interest:

  • I use include_keys so only the fields in the array will be extracted (positive impact on performance)
  • I tried field_split to help out with one of the previous challenges but it did not make a lot of difference
  • And lastly I specify my new nested fields for the extracted values

 

So that’s it! The complete file looks as follows:

filter {
  if [type] == "syslog" and [message] =~ "myfw" {
    grok {
      break_on_match => false
      match => [
      "message",  "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: \[(?<syslog_pid>.*?)\] %{GREEDYDATA:syslog_message}",
      "syslog_message", "\[(?<firewall_rule>.*?)\]"
      ]
    }
    
    # mutate our values
    mutate {
      add_field => [ "event_type", "firewall" ]
      rename => { "firewall_rule" => "[firewall][rule]" }
      gsub => [ "message", "OUT= MAC=", "MAC=" ]            # Here we remove scenario where this value is empty
    }

    # Apply key value pair
    kv {
      include_keys => ["SRC","DST","PROTO","IN","MAC","SPT","DPT"]
      field_split => " \[\]"
      add_field => {
        "[firewall][source_address]" => "%{SRC}"
        "[firewall][destination_address]" => "%{DST}"
        "[firewall][protocol]" => "%{PROTO}"
        "[firewall][source_port]" => "%{SPT}"
        "[firewall][destination_port]" => "%{DPT}"
        "[firewall][interface_in]" => "%{IN}"
        "[firewall][mac_address]" => "%{MAC}"
      }
    }
  }
}

 

Enjoying results:

We now need to test whether this really works as we expect. To check it, we will of course use Docker.

First I will create the aforementioned config file and name it logstash.conf. Once that’s done we bring the container up with the following command:

docker run -d -p 25666:25666 -v "$PWD":/config-dir logstash logstash -f /config-dir/logstash.conf

This creates a container which I can then test locally. Now, for this to work you need an input source (e.g. tcp/udp) and an output (e.g. stdout with a ruby codec) – see the sketch below.
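For completeness, a hedged sketch of the input/output sections I mean (port 25666 matches the docker run command above; the rubydebug codec just pretty-prints events):

input {
  tcp {
    port => 25666
    type => "syslog"
  }
}

output {
  stdout { codec => rubydebug }
}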

Then I split my screen using tmux and execute requests while watching the results from docker logs:

[Screenshot: Logstash parsing Vyos syslog entries locally]

 

And that’s it! You have beautifully working parsing for your Vyos box! If you have any comments or improvements – feel free to share!

 

 


MacOs – Multiple terminals with customised color scheme

If you are like me 😀 and do not confine yourself to only one operating system, then you probably move between the world of Windows and the world of Linux 😀

At the moment I have set up my working environment in a way that allows me to work with both systems. So on one end I have the new and shiny Windows 10, and on the other boot I have macOS.

And on macOS I have been looking for software that would give me better control and visibility of my sessions than the standard Terminal app. With a bit of looking around I found a couple of alternatives, and the one that really got my attention is iTerm.

The way it looks is more than satisfying 🙂 You can see a detailed view of a horizontal split in the screenshot below:

[Screenshot: iTerm with a horizontal split]

 

The program has a lot of cool color schemes on offer. What I also did was edit my profile file according to the solutions mentioned in this post.

If you’d rather just get the details of the modification, here it is:

export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
export PS1='\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

 

Hope this helps you customise your environment to your needs 😀 If you are using other helpful tools, feel free to share in the comments!