
C# – Generate Entity Framework SQL script

This one is going to be a really short one. As it happens, I need to enable others to recreate the databases for the API models I create, so I usually deliver a SQL script that does the work. So how do you generate one in Visual Studio?

Well, using the Package Manager Console you just call:

Update-Database -Script -SourceMigration:0

 

And that creates the SQL script for you – of course without any seed data 🙂
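
If you only need the changes between two specific migrations rather than the full script from scratch, the same switch also takes a target – the migration names below are just examples:

Update-Database -Script -SourceMigration:AddCustomerTable -TargetMigration:AddOrderTable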


PowerShell – Autodocument your modules/scripts using markdown

When writing your scripts or modules, have you not wished that it would all autodocument itself? Isn't this what we should be aiming for when creating automations? 🙂 Automations that automate documenting themselves?

This is exactly what automation should be about, and today I'm going to show you how I create automated documentation for extremely big modules in seconds. As mentioned before, we will be using Markdown, so if this is something new to you it would be great to jump here and get some more info first.

 

Prerequisites

In order for this to work you must have a good habit of documenting your functions. This is the key to success. An example of such function documentation, using the comment-based help approach, can look as follows:

function Invoke-SomeMagic
{
    <#
        .SYNOPSIS
        Creates magical events

        .PARAMETER NumberOfPeople
        This parameter defines how many people are looking at your screen at the time of invoking the cmdlet

        .PARAMETER DifficultyImpression
        This parameter defines how difficult what you are currently doing looks

        .DESCRIPTION
        This function executes magical events all around you. By defining the parameters you have direct control over how difficult it will seem, and how many people are watching has a direct influence on the range of events.

        .EXAMPLE
        Invoke-SomeMagic -NumberOfPeople 1 -DifficultyImpression 10

        Creates really difficult looking magic for one person

        .EXAMPLE
        Invoke-SomeMagic -NumberOfPeople 100 -DifficultyImpression 10

        Creates a magical show
    #>
    param
    (
        [int]$NumberOfPeople,
        [int]$DifficultyImpression
    )

    # Function doing something here :) ...........
}

 

Auto-documenting script

Now, what would an automation be without automating it 😀? Below is my implementation of auto-documenting to Markdown.

 

What I really like here is the fact that it generates a temporary file during documentation (I discovered that encoding causes problems with the online PDF converter). The whole thing can be changed to suit your needs and layout requirements.
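
If you want to roll your own, a minimal sketch of the idea could look like the snippet below. The module name and output path are purely illustrative, and it assumes the module is already imported and that every exported function carries comment-based help:

# Collect comment-based help of every exported function and emit Markdown
$moduleName = 'MyDemoModule'                              # illustrative module name
$outputFile = "C:\temp\$moduleName.md"                    # illustrative output path

$markdown = foreach ($command in Get-Command -Module $moduleName -CommandType Function)
{
    $help = Get-Help $command.Name -Full

    "## $($command.Name)"
    ""
    "$($help.Synopsis)"
    ""
    "### Parameters"
    foreach ($parameter in $help.parameters.parameter)
    {
        "* **$($parameter.name)** - $($parameter.description.Text)"
    }
    ""
    "### Examples"
    foreach ($example in $help.examples.example)
    {
        "``$($example.code)``"
        ""
    }
}

# Write via a temporary file to keep control over the encoding (see the remark above)
$tempFile = [System.IO.Path]::GetTempFileName()
$markdown | Out-File -FilePath $tempFile -Encoding utf8
Move-Item -Path $tempFile -Destination $outputFile -Force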

 

Convert it to PDF

The last stage is converting it to PDF. At the moment I'm using http://www.markdowntopdf.com/ to convert the file prepared by the above script. And I must say that the results are extremely satisfying.

 

Example

I have prepared a small demo of how it works in action. For this purpose I created a demo module with 3 dummy functions and then ran the script. Below is a snippet of how it looks. As mentioned before – I really like this, and that kind of file can be nicely sent to another engineer to quickly get them familiar with your module.

 

markdown_autodocumentation


PowerShell – Network cmdlets

In an effort to move away from old-school habits of using e.g. nslookup instead of PS cmdlets, I thought it would be beneficial to reblog, for reference, a quite interesting article about replacing those commands with pure PowerShell cmdlets. The original article you can find here.

IPCONFIG

Used to get IP configuration.

Get-NetIPConfiguration
Get-NetIPAddress

Get-NetIPConfiguration
Get-NetIPAddress | Sort InterfaceIndex | FT InterfaceIndex, InterfaceAlias, AddressFamily, IPAddress, PrefixLength -Autosize
Get-NetIPAddress | ? AddressFamily -eq IPv4 | FT -AutoSize
Get-NetAdapter Wi-Fi | Get-NetIPAddress | FT -AutoSize

 

PING

Checks connectivity to a target host.

Test-NetConnection

Test-NetConnection www.microsoft.com
Test-NetConnection -ComputerName www.microsoft.com -InformationLevel Detailed
Test-NetConnection -ComputerName www.microsoft.com | Select -ExpandProperty PingReplyDetails | FT Address, Status, RoundTripTime
1..10 | % { Test-NetConnection -ComputerName www.microsoft.com -RemotePort 80 } | FT -AutoSize

 

NSLOOKUP

Translates an IP to a name or vice versa.

Resolve-DnsName

Resolve-DnsName www.microsoft.com
Resolve-DnsName microsoft.com -type SOA
Resolve-DnsName microsoft.com -Server 8.8.8.8 -Type A

 

ROUTE

Shows the IP routes (can also be used to add/remove routes).

Get-NetRoute
New-NetRoute
Remove-NetRoute

Get-NetRoute -Protocol Local -DestinationPrefix 192.168*
Get-NetAdapter Wi-Fi | Get-NetRoute

 

TRACERT

Trace route. Shows the IP route to a host, including all the hops between your computer and that host.

Test-NetConnection -TraceRoute

Test-NetConnection www.microsoft.com -TraceRoute
Test-NetConnection outlook.com -TraceRoute | Select -ExpandProperty TraceRoute | % { Resolve-DnsName $_ -type PTR -ErrorAction SilentlyContinue }

 

 

NETSTAT

Shows current TCP/IP network connections.

Get-NetTCPConnection

Get-NetTCPConnection | Group State, RemotePort | Sort Count | FT Count, Name -AutoSize
Get-NetTCPConnection | ? State -eq Established | FT -AutoSize
Get-NetTCPConnection | ? State -eq Established | ? RemoteAddress -notlike 127* | % { $_; Resolve-DnsName $_.RemoteAddress -type PTR -ErrorAction SilentlyContinue }
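
One thing people often pull out of netstat is which process owns a connection; on recent Windows versions the OwningProcess property lets you approximate that, for example:

Get-NetTCPConnection -State Listen | Select LocalAddress, LocalPort, @{n='Process';e={(Get-Process -Id $_.OwningProcess).ProcessName}} | FT -AutoSize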

 

 

So happy moving into the object-oriented world of PowerShell 🙂

 


PowerShell – Active Directory changes synchronization with cookie

In today's post I wanted to show you something that can be of interest to those who need to find recent Active Directory changes but are challenged by e.g. a big AD forest with a large number of objects and are hitting performance problems when executing queries. So where does this problem come from? Well, if you have an Active Directory with a lot (really a lot) of objects, then querying for changes frequently can be troublesome.

But don't worry – there are a couple of ways to tackle this challenge. If you look for more details you will find that you can just query the information (duh?!), subscribe to be notified when changes occur (push), or make incremental queries (pull). And today we will investigate exactly that: querying using a synchronization cookie.

The principle here is to use a cookie which will allow us to poll for changes since the last time we queried AD. This way we can run a very specific query and return only the subset of properties we are really interested in.

 

The whole code is quite simple to implement and consists of the following:
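
A minimal sketch of the approach in PowerShell, using the DirSync control exposed through System.DirectoryServices – the domain DN, LDAP filter and cookie path are illustrative:

Add-Type -AssemblyName System.DirectoryServices

# First run: establish a baseline and store the synchronization cookie
$root     = New-Object System.DirectoryServices.DirectoryEntry 'LDAP://DC=contoso,DC=com'   # illustrative DN
$searcher = New-Object System.DirectoryServices.DirectorySearcher $root
$searcher.Filter = '(objectClass=user)'
$searcher.PropertiesToLoad.AddRange(@('displayName','telephoneNumber'))
$searcher.DirectorySynchronization = New-Object System.DirectoryServices.DirectorySynchronization

# The first pass returns all matching objects and primes the cookie
$searcher.FindAll() | Out-Null

# Persist the cookie so the next poll only returns what changed since now
$cookiePath = 'C:\temp\ad-sync.cookie'                                                      # illustrative path
[System.IO.File]::WriteAllBytes($cookiePath, $searcher.DirectorySynchronization.GetDirectorySynchronizationCookie())

# Subsequent runs: load the cookie, repeat the same query and receive only the changes
$cookie = [System.IO.File]::ReadAllBytes($cookiePath)
$searcher.DirectorySynchronization = New-Object System.DirectoryServices.DirectorySynchronization -ArgumentList (,$cookie)
$changes = $searcher.FindAll()
$changes | ForEach-Object { $_.Properties['distinguishedname'] }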

And that would be all for this. From the code above you can see that your subsequent requests will be based on changes since the last poll (of course scoped by the query you provided). In one of the next posts we will focus on doing this in C#, as some of you may want to do more DevOps.

 

 

 

 

 


OpenSSL – generate a self-signed certificate

Quite often, to test different aspects of IT or security, we use certificates which are self-signed. As the name implies, we are responsible for generating them. In this post we will go through a short explanation of how to generate one with the use of OpenSSL.

To create one we will issue the following command :

openssl req -x509 -newkey rsa:2048 -keyout certificate-key.pem -out certificate.pem -days 365

In order to better understand the above command, let's break it down:

  • req : PKCS#10 certificate request and certificate generating utility
  • -x509 : we receive a self-signed certificate as output instead of a certificate request
  • -newkey rsa:#### : creates a new certificate request and a new private key. In this instance we use RSA with a size of #### bits
  • -keyout file.name : outputs the just-created private key into a file
  • -out file.name : specifies the output file name
  • -days # : specifies how many days the certificate will be valid when the x509 option has been used. The default value for this setting is 30 days
  • -nodes : indicates that the private key should not be encrypted
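
To double-check what was generated, the certificate can be inspected in human-readable form:

openssl x509 -in certificate.pem -text -noout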

 

For those on Windows, we sometimes need a PFX (which contains the private and public key). The easiest way is to use OpenSSL in the following form:

openssl pkcs12 -inkey bob_key.pem -in bob_cert.cert -export -out bob_pfx.pfx

 

Since some of you will be working on Windows, you might come across the following error:

WARNING: can't open config file: /usr/local/ssl/openssl.cnf

then what you are missing is a setting for an environment variable (make sure to adjust the path to your cfg file):

set OPENSSL_CONF=c:\OpenSSL-Win32\bin\openssl.cfg

 

And that's it for the self-signed certificate. In the next post we will use this knowledge of certificates with the power of Docker and set up our own registry.

 


SSL file standards explained

While browsing the net I came across an interesting post on Server Fault and I thought it would be nice to have it as a point of reference, especially when working with certificates.

Below you may find the most popular standards:

  • .csr This is a Certificate Signing Request. Some applications can generate these for submission to certificate-authorities. The actual format is PKCS10 which is defined in RFC 2986. It includes some/all of the key details of the requested certificate such as subject, organization, state, whatnot, as well as the public key of the certificate to get signed. These get signed by the CA and a certificate is returned. The returned certificate is the public certificate (not the key), which itself can be in a couple of formats.
  • .pem Defined in RFCs 1421 through 1424, this is a container format that may include just the public certificate (such as with Apache installs, and CA certificate files /etc/ssl/certs), or may include an entire certificate chain including public key, private key, and root certificates. The name is from Privacy Enhanced Mail (PEM), a failed method for secure email, but the container format it used lives on, and is a base64 translation of the x509 ASN.1 keys.
  • .key This is a PEM formatted file containing just the private-key of a specific certificate and is merely a conventional name and not a standardized one. In Apache installs, this frequently resides in /etc/ssl/private. The rights on these files are very important, and some programs will refuse to load these certificates if they are set wrong.
  • .pkcs12 .pfx .p12 Originally defined by RSA in the Public-Key Cryptography Standards, the “12” variant was enhanced by Microsoft. This is a passworded container format that contains both public and private certificate pairs. Unlike .pem files, this container is fully encrypted. OpenSSL can turn this into a .pem file with both public and private keys: openssl pkcs12 -in file-to-convert.p12 -out converted-file.pem -nodes

A few other formats that show up from time to time:

  • .der A way to encode ASN.1 syntax in binary, a .pem file is just a Base64 encoded .der file. OpenSSL can convert these to .pem (openssl x509 -inform der -in to-convert.der -out converted.pem). Windows sees these as Certificate files. By default, Windows will export certificates as .DER formatted files with a different extension. Like…
  • .cert .cer .crt A .pem (or rarely .der) formatted file with a different extension, one that is recognized by Windows Explorer as a certificate, which .pem is not.
  • .p7b Defined in RFC 2315, this is a format used by Windows for certificate interchange. Java understands these natively. Unlike .pem style certificates, this format has a defined way to include certification-path certificates.
  • .crl A certificate revocation list. Certificate Authorities produce these as a way to de-authorize certificates before expiration. You can sometimes download them from CA websites.

In summary, there are four different ways to present certificates and their components:

  • PEM Governed by RFCs, it’s used preferentially by open-source software. It can have a variety of extensions (.pem, .key, .cer, .cert, more)
  • PKCS7 An open standard used by Java and supported by Windows. Does not contain private key material.
  • PKCS12 A private standard that provides enhanced security versus the plain-text PEM format. This can contain private key material. It’s used preferentially by Windows systems, and can be freely converted to PEM format through use of openssl.
  • DER The parent format of PEM. It’s useful to think of it as a binary version of the base64-encoded PEM file. Not routinely used by much outside of Windows.
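
As a practical example of moving between these formats, the certificates inside a .p7b can be pulled out into PEM with openssl as well (file names are just examples; add -inform DER if the file is binary rather than base64):

openssl pkcs7 -print_certs -in certificates.p7b -out certificates.pem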

Azure Files on Ubuntu

If you have not seen the recent post on the Azure blog, I would like to let you know that Azure Files is now GA. Details are available in the blog entry here.

Since I would not like to duplicate content, I'm going to show you how you can get an Azure Files share mapped on your Linux boxes. Why Linux boxes? I already have a gazillion ideas for using this – the major one being Docker and containers which I would like to make HA, or my own Docker repository.

 

Creating a file share via the portal is extremely easy and intuitive.

azure_files_firstview

 

Install tools

We need to install the following package if it is not already present (I have become a fan of Ubuntu 🙂):

sudo apt-get install cifs-utils

 

Mount fileshare

The next step is mounting the share. This has some limitations based on the SMB protocol version being used (for more detailed info look into the mentioned Azure blog post). In this instance I will be using SMB v3, so we are good to go on using Azure Files on premises.

sudo mount -t cifs //rafpeninja.file.core.windows.net/docker-demo-data ./dockerdemodata -o vers=3.0,username=rafpeninja,password=YourAwesomeStorageKey==,dir_mode=0777,file_mode=0777

 

As I did not want to play with any restrictions yet, the permissions are kind of high 🙂 but you can modify them as you need.
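
If the share should survive a reboot, the same options can go into /etc/fstab – the mount point and credentials file below are illustrative, and keeping the storage key in a root-only credentials file is nicer than putting it on the command line:

//rafpeninja.file.core.windows.net/docker-demo-data /mnt/dockerdemodata cifs vers=3.0,credentials=/etc/azurefiles.cred,dir_mode=0777,file_mode=0777 0 0

with /etc/azurefiles.cred (readable by root only) containing:

username=rafpeninja
password=YourAwesomeStorageKey==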

 

Simple test

Once this is done you can head to the folder and create a sample file.

sudo touch test.me

 

When done you can see that file instantly via the portal

azure_file_test_ok

 

 

And here you go – your file is immediately available. If you have any scenarios where you already use this, I'm keen to hear about them!

 

 

 


Pester – Introduction to test-driven development (TDD) for PowerShell

tdd-logo-01

Today I wanted to start a series on Pester for PowerShell. If you have not heard about it before, you might find it quite interesting. It allows you to write code and test it alongside.

A real-life example of why this would be useful? Nothing easier – imagine complex functions executing chained operations. Making a small modification to one piece might not seem to have any drawbacks for the whole operation… but are you sure? It might turn out that this small modification affects something further down the chain, and the impact would not be seen initially.

And this is where Pester comes to the rescue! By getting into the habit of writing this way you will save yourself from butterfly effects. I can assure you that thanks to this approach I was able to avoid several situations where exactly such small changes without visible impact would have broken a lot of things 🙂

Get familiar

Pester is actively developed on GitHub and you can head to the project page. I can recommend checking out the wiki page and the open issues, as those two are extremely useful sources of information.

 

Install Pester

Well, there is not much to say 🙂 With the new and shiny PowerShell it cannot be simpler than:

Pester_install
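
For reference, on PowerShell 5 and later this boils down to a single cmdlet pulling Pester from the PowerShell Gallery:

Install-Module -Name Pester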

 

 

And that was it – you are now set up for your first test.

 

First test

In order to run a simple test we will create ourselves 2 files: one for our real function and one for the tests. Pester makes it really easy, and we can use a built-in cmdlet to prepare those 2 for us:

New-Fixture -Name FirstFunctionTest

pester_first_func

 

Let's make a dummy function in our FirstFunctionTest.ps1 file. I will go really easy on this example 🙂

function FirstFunctionTest
{
        return 1
}

And now let's move to the file FirstFunctionTest.Tests.ps1 and write the following:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "FirstFunctionTest" {
    It "returns value that is not null" {
        FirstFunctionTest | Should Not BeNullOrEmpty
    }

    It "returns value that is exactly 1" {
        FirstFunctionTest | Should be 1
    }
}

 

The majority of the code was prepared by Pester. For the moment I just defined that it must return a value that is not null, and that the returned value must be equal to 1. Great! Let's run this simple test now:

invoke-pester

And the results are instant 🙂

pester_first_functest

 

When a change was made

So now we will change our function to return something different – in a nutshell, we will simulate that a change with a potentially big impact has been made.
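
Any edit that breaks the contract will do – for instance changing the return value in FirstFunctionTest.ps1:

function FirstFunctionTest
{
        return 2
}

Running invoke-pester again now fails the second assertion, since the function no longer returns 1.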

pester_first_functesterror

Thanks to Pester you would immediately see this 🙂

 

Summary

This is only a small example showing just the tip of what Pester can do. In the next posts we will be investigating much more complex scenarios. Stay tuned 🙂

 

 


Docker compose and ELK – setup in an automated way

docker-compose-logo-01

Although originally this was supposed to be a short post about setting up an ELK stack for logging, the more I worked with this technology the more it got me really 'inspired', and I thought it would be worth making it work the right way from the very beginning.

 

Now, since we are up for automating things, we will try to make use of Docker Compose, which will allow us to set up the whole stack in an automated way. Docker Compose is detailed here.

Compose, in short, allows you to describe what your services will look like and how they interact with each other (volumes/ports/links).

In this post we will be using Docker + docker-compose on an Ubuntu host running in Azure. If you are wondering why I just show my IP addresses all the time on the screenshots… it is because those are not load-balanced static IP addresses, so every time I spin up a host I get a new one 🙂

 


This post contains information which has been updated in the post:

Docker compose and ELK – Automate the automated deployment

However, to get an idea of how the solution works, I recommend just reading through 🙂


 

 

Installing Docker-compose

So the first thing we need to do is install docker-compose. Since, as we all know, Docker is under constant development, it is easiest to give you the link to the GitHub releases page rather than a direct link which can be out of date.

Once installed, you can use the following command to verify the installation:

docker-compose --version

 

Preparing folder structure

Since we will be using config files and storing Elasticsearch data on the host, we will need to set up a folder structure. I'm aware that this can be done better with variables 🙂 but Ubuntu is still a learning curve for me, so I will leave it up to you to find better ways 🙂 In the meantime let's run the following commands:

sudo mkdir -p /cDocker/elasticsearch/data
sudo mkdir -p /cDocker/logstash/conf
sudo mkdir -p /cDocker/logstash/agent
sudo mkdir -p /cDocker/logstash/central
sudo mkdir -p /cDocker/compose/elk_stack

 

Clone configuration files

Once you have the folder structure, we will prepare our config files. To do this we will be cloning GitHub repositories (gists) which I have prepared in advance (and tested as well, of course).

git clone https://gist.github.com/60c3d7ff1b383e34990a.git /cDocker/compose/elk_stack

git clone https://gist.github.com/6627a2bf05ff956a28a9.git /cDocker/logstash/central/

git clone https://gist.github.com/0cd6594672ebfe1205a5.git /cDocker/logstash/agent/

git clone https://gist.github.com/c897a35f955c9b1aa052.git /cDocker/elasticsearch/data/

 

Since I keep slightly different names on GitHub (this might be subject to change in the future), we need to rename them a bit 🙂 For this you can run the following commands:

mv /cDocker/compose/elk_stack/docker-compose_elk_with_redis.yml  /cDocker/compose/elk_stack/docker-compose.yml

mv /cDocker/elasticsearch/data/elasticsearch_sample_conf.yml /cDocker/elasticsearch/data/elasticsearch.yml

mv /cDocker/logstash/agent/logstash_config_agent_with_redis.conf /cDocker/logstash/conf/agent.conf

mv /cDocker/logstash/central/logstash_config_central.conf /cDocker/logstash/conf/central.conf

 

Docker compose file

If you look at the compose file below, you will notice that we define how our images will be built, what ports will be exposed, and what links will be created amongst containers. Thanks to that, containers will be created in a specific order and linked accordingly. And since we have already prepared the configuration files, the whole stack will be ready to go.
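
The actual file is the docker-compose.yml cloned from the gist above; as a rough sketch, a compose file for an ELK-with-Redis stack of this shape can look roughly like this (v1 syntax; image names, ports and container paths are illustrative):

elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
  volumes:
    - /cDocker/elasticsearch/data:/usr/share/elasticsearch/data    # keep index data on the host

redis:
  image: redis
  ports:
    - "6379:6379"                                                  # broker between the logstash agent and central instance

logstash:
  image: logstash
  command: logstash -f /etc/logstash/conf.d/central.conf           # central config prepared earlier
  volumes:
    - /cDocker/logstash/conf:/etc/logstash/conf.d
  links:
    - elasticsearch
    - redis

kibana:
  image: kibana
  ports:
    - "5601:5601"
  links:
    - elasticsearch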

 

Execute orchestration

Now we have everything in place for our first run of the orchestration. The next step is just navigating to the compose folder (where our docker-compose file is) and running the following command:

/cDocker/compose/elk_stack#: docker-compose up -d

This will pull all the required layers and create the services afterwards. Once completed you should see something similar to the following:

docker_compose_elk_stack_ready_01
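
You can also check the state of the whole stack at any point with:

docker-compose ps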

 

 

Summary

Well, and that's it folks! We of course have the potential to do much more (using variables / labels etc.), however we will do more funky stuff in the next posts. Since Azure Files is finally in production, we will use it as persistent storage in one of our future posts, so stay tuned.

On the subject of the ready-to-use ELK stack, we will be looking into managing input based on Logstash plugins, and we will see with our own eyes how this Docker ELK stack will empower our IoT automations!