
Docker compose v2 – using static network addresses

Docker compose is a really great piece of code 🙂 that will allow you to build better orchestration with your containers. Recent breaking releases introduced a lot of features. While looking at some of them I was wondering about situations in which you build a more (or a bit less) complex container-based environment and do not have service discovery. In some instances you would just like to have static IP addresses.

Now this is perfectly easy to do when running containers with the CLI … but how do you do that with compose? After looking at the documentation I came up with the following approach.

It allows me to specify static IP addresses for my containers using the compose file. For reference you can find the snippet of the full file below.

version: '2'

services:
  haproxy:
       image: haproxy:latest
       ports:
          - "80:80"
          - "443:443"
       volumes:
          - ${PWD}/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
       restart: always
       networks:
          - widgets
       logging:
        driver: json-file
        options:
          max-size: "100m"
          max-file: "3"
          labels: "haproxy"

  mariadb:
       image: mariadb:latest
       volumes:
          - /vol/appdata/mariadb:/var/lib/mysql
       environment:
          - MYSQL_ROOT_PASSWORD=secret-pw
       restart: always
       networks:
          - widgets
       logging:
         driver: json-file
         options:
           max-size: "100m"
           max-file: "3"
           labels: "mariadb"

  app_orangella:
       image: apache:1.0
       restart: always
       ports:
          - "81:80"
       networks:
          - widgets
       logging:
         driver: json-file
         options:
           max-size: "50m"
           max-file: "3"
           labels: "app_orangella"

networks:
  widgets:
    driver: bridge
    ipam:
     config:
       - subnet: 172.10.0.0/16
         gateway: 172.10.5.254
         aux_addresses:
          haproxy: 172.10.1.2
          mariadb: 172.10.1.3
          app_orangella: 172.10.1.4
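
One thing worth noting here: the compose documentation describes aux_addresses as reserving addresses for hosts managed outside of compose. If you want to pin a compose-managed service itself to a fixed address, the v2 format also accepts a per-service ipv4_address key – a minimal sketch reusing the same widgets network (the addresses are just examples):

version: '2'

services:
  haproxy:
    image: haproxy:latest
    networks:
      widgets:
        ipv4_address: 172.10.1.2

networks:
  widgets:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/16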

 

Hope this will get you rolling with Docker compose 🙂

 


Docker compose: error while loading shared libraries libz.so.1

I recently got a very annoying error on a freshly installed CentOS 7 machine when trying to use the most up to date docker-compose (1.6.2 at the moment of writing).

The error stated the following when trying to execute the compose file:

docker-compose up -d
docker-compose: error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted

 

I temporarily disabled SELinux, however that did not help and the logs were not helpful either in this instance. After a bit of wandering around the internet I came across a GitHub issue and tried one of the workarounds, which worked in my case.

The solution was to remount /tmp with exec permission by executing:

sudo mount /tmp -o remount,exec
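
If /tmp on your box is mounted noexec on purpose (for example through a hardened fstab entry), that remount will not survive a reboot. Two sketches of more permanent alternatives – assuming a tmpfs /tmp line in /etc/fstab for the first, and a writable scratch directory of your own choosing for the second (docker-compose is a PyInstaller-packaged binary, which generally honours TMPDIR for its extraction directory):

# /etc/fstab - drop noexec from your existing /tmp entry, e.g.
tmpfs   /tmp   tmpfs   defaults,nosuid,nodev   0 0

# or point the binary at a different scratch directory for this shell
export TMPDIR=/opt/tmp
docker-compose up -d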

 


Ansible role for Redhat 7 CIS baseline


Intro

If you are working with environments where certain policies and rules need to be applied, something like the CIS baselines will be well known to you.

It works on the basis that you define which points you will apply to your systems, and from that point onwards you are expected to deliver proof of whether your systems are compliant (or not) – and if you do not apply certain settings, the reason for it.

However, the problem comes when you need to enforce this compliance on multiple systems and make sure they are all happily running these policies.

Automation:

And here comes the really good part – where you take a configuration management tool like Ansible and create a reusable piece of code which defines your infrastructure. Looking at the CIS baseline documents, if you were to start from zero that would be a lot of work … but … a good friend of mine has spent his time preparing a CIS baseline for Redhat 7, which is now available on GitHub in his repository HERE 🙂
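
Consuming such a role is then just a few lines of playbook – a minimal sketch, assuming the role has been placed in your roles path under a hypothetical name rhel7-cis and that your hosts sit in a rhel7_servers group:

---
- hosts: rhel7_servers
  become: yes
  roles:
    - rhel7-cis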

 

And for much more interesting info you can always look at his blog under https://blog.verhaar.io

 


Docker – ELK with compose v2


This post contains information which is based on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it's worth checking that out 🙂



If you are working with Docker then you are in for non-stop challenging and interesting times. And since Docker is so actively developed, you cannot just build a solution and ‘forget about it’ – you would miss so much innovation.

Since I previously created my ELK stack with Docker compose, I decided it was finally a good time to move it to compose v2!

 

 

If you have not heard about the breaking changes then there is quite a nice post on the Docker blog where you can get all the info to get you going. To avoid searching all over the internet, here is the link.

Once you get an idea of how cool things can now be done, we can get going. We will start off by getting the files from the GitHub repository. This time it differs a bit from previous posts – back then you could end up with a version of the repo which was not stable or just refused to work for whatever reason. I have now tagged a specific version, which allows you to get to a known good state of the code – in a nutshell, it will work 😀

so let’s get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
cd docker-elk-stack
git checkout tags/v2.0.0

Once you have this you can just start it off by typing

docker-compose up -d

This will start creating the containers, which gives the following output:

[screenshot: docker-compose up output]

 

Let’s see if we have all containers running correctly by checking the logs:

docker-compose logs

You will probably get output similar to the following:

[screenshot: docker-compose logs output]

 

And that’s basically how you create the stack with the default setup – but if you would like to tweak some settings, check out the following:

Logging:

I limited the logging driver’s file size and rollover by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"

 

Elasticsearch data persistence:

As for most development tasks I do not use persistent data; if you would like to have it for the Elasticsearch cluster, you will have to uncomment the following line in the compose file, specifying where to store the data:

volumes:
  # - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data

 

Logstash configuration:

By default Logstash will use demo-logstash.conf, which is configured with just a beats input and some filtering applied. Once processed, data is sent to Elasticsearch. There are more ready-made Logstash config files under the ./logstash folder, so feel free to explore and possibly use them.
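
The general shape of such a pipeline (a hypothetical minimal sketch, not the exact contents of demo-logstash.conf, assuming the Elasticsearch service is reachable under the elasticsearch hostname) is a beats input feeding an elasticsearch output:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}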

 

 

If you have any comments – leave them behind, as I’m interested in your approach as well 😀

 


Ansible – IP addresses of all nodes in group

I have been searching for this for a while, especially when I was setting up GlusterFS: the challenge of getting all the IPs of the hosts within my Ansible group.

Maybe not the prettiest and most elegant solution, however it does exactly what is expected of it: it sets a variable with the IP addresses (in my example I’m using the IPv4 address of eth0) and joins them into a comma-separated result.
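
In essence it boils down to a single set_fact task – a minimal sketch with hypothetical group and fact names, assuming facts have been gathered for all hosts in the group:

- hosts: gluster
  tasks:
    - name: Build a comma-separated list of eth0 IPv4 addresses for the whole group
      set_fact:
        gluster_node_ips: "{{ groups['gluster'] | map('extract', hostvars, ['ansible_eth0', 'ipv4', 'address']) | list | join(',') }}"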

 

 

If you have a better approach – please leave a comment as I’m quite interested to read how you tackle this challenge 🙂

 


PowerShell – Azure Resource Manager policies

Microsoft does not stop listening to people. Many IT professionals are heavily using Azure Resource Manager, and the natural course of action is to require better control over what can and cannot be done.

Simple as it may sound, Microsoft now offers ARM policies. You can find details from the 23:22 mark in the video below.

 

On the good side, Microsoft has already prepared documentation for us, which is waiting here.

Is it difficult? I personally think it is not – although there is no GUI, but which engineer these days uses a GUI 🙂 You have the option to use either the REST API or PowerShell cmdlets (which communicate over that API 🙂).

What do policies give me control over? They are built on the following principle:

{
  "if" : {
    <condition> | <logical operator>
  },
  "then" : {
    "effect" : "deny | audit"
  }
}

As you can see, we define conditions and operators, and based on that we take an action such as deny or audit.

 

At the moment I’m not droping any extra examples – as documentation have already couple of them – so you might to try them out as you read the details.

 

Happy automating 🙂


PowerShell – Autodocument your modules/scripts using markdown

When writing your scripts or modules, have you not wished that it would all just document itself? Isn’t this what we should be aiming for when creating automations? 🙂 So automations would automate documenting themselves?

This is exactly what automation should be about, and today I’m going to show you how I create automated documentation for extremely big modules in seconds. As mentioned before, we will be using MarkDown, so it would be great if you jump here and get some more info if this is something new to you.

 

Prerequisites

In order for this to work you must have a good habit of documenting your functions. This is the key to success. An example of such function documentation using the comment-based help approach can look as follows:

function invoke-SomeMagic
{
      <#
        .SYNOPSIS
        Creates magical events

        .PARAMETER NumberOfPeople
        This parameter defines how many people are looking at your screen at the time of invoking the cmdlet

        .PARAMETER DifficultyImpression
        This parameter defines how difficult what you are currently doing looks

        .DESCRIPTION
        This function executes magical events all around you. By defining the parameters you have direct control over how difficult it seems, and how many people are watching has a direct influence on the range of events.


        .EXAMPLE 
        invoke-SomeMagic -NumberOfPeople 1 -DifficultyImpression 10

        Creates really difficult looking magic for one person

        .EXAMPLE 
        invoke-SomeMagic -NumberOfPeople 100 -DifficultyImpression 10

        Creates a magical show
    #>


# Function doing something here 🙂 ...........


}

 

Auto documenting script

Now what would an automation be without automating it 😀? Below is my implementation of auto-documenting to MarkDown.
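
In essence the script walks every function exported by a module, pulls its comment-based help and renders it into a MarkDown file. A condensed sketch of that idea (hypothetical parameter names, trimmed of layout niceties) looks roughly like this:

param(
    [Parameter(Mandatory)] [string] $ModuleName,
    [string] $OutputFile = "$ModuleName.md"
)

Import-Module $ModuleName -ErrorAction Stop

$md = New-Object System.Text.StringBuilder
[void]$md.AppendLine("# Module: $ModuleName")

foreach ($cmd in (Get-Command -Module $ModuleName -CommandType Function)) {
    $help = Get-Help $cmd.Name -Full

    [void]$md.AppendLine("## $($cmd.Name)")
    [void]$md.AppendLine($help.Synopsis)

    [void]$md.AppendLine("### Parameters")
    foreach ($p in $help.parameters.parameter) {
        [void]$md.AppendLine("* **$($p.name)** - $($p.description.Text)")
    }

    [void]$md.AppendLine("### Examples")
    foreach ($ex in $help.examples.example) {
        [void]$md.AppendLine("    " + $ex.code)
    }
}

# Write to a temporary file first, then re-write with plain ASCII encoding (see the note below about the PDF converter)
$tmp = [System.IO.Path]::GetTempFileName()
$md.ToString() | Out-File -FilePath $tmp -Encoding utf8
Get-Content $tmp | Out-File -FilePath $OutputFile -Encoding ascii
Remove-Item $tmp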

 

What I really like here is the fact that it generates a temporary file during documentation (I discovered the encoding gives problems with the online PDF converter). The whole thing can be changed to suit your needs and layout requirements.

 

Convert it to PDF

The last stage would be converting it to PDF. At the moment I’m using http://www.markdowntopdf.com/ to convert the file prepared by the above script. And I must say the results are extremely satisfying.

 

Example

I have prepared a small demo of how it works in action. For this purpose I created a demo module with 3 dummy functions and then ran the script. Below is a snippet of how it looks. As mentioned before – I really like this, and that kind of file can be nicely sent to another engineer to quickly get them familiar with your module.

 

[screenshot: generated MarkDown documentation]


Powershell with Azure – deploy IaaS with Azure Resource Manager

These days managing the cloud should be something that is well automated and that you can be really comfortable with. Microsoft Azure, when using Azure Resource Manager, allows you to manage infrastructure via APIs or via PowerShell (which calls those web APIs underneath).

I have to say that both of the approaches are quite nice. I have already worked some time ago with ARM JSON templates (using the Visual Studio add-on) and they enable you to perform advanced operations in a declarative way.

The good news is that we can do that with PowerShell as well. I’m aware that all over the internet you can find ready scripts that will do deployments with a click of a button 🙂 but this is about exercising 🙂 as that’s how we learn.

First you should make sure that you have the Azure PowerShell module installed. So far I have always been using the Web Platform Installer. Once installed, you should see it listed when you query for modules:

PS S:\> Get-Module *azure* -ListAvailable
ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   0.9.8      Azure                               {Disable-AzureServiceProjectRemoteDesktop, Enable-AzureSer...

 

With the above prerequisite in place we can continue with our exercise. Our target will be to deploy 4 virtual machines. The first of them will become a domain controller and should have a static IP address. The remaining ones will use dynamic addresses. Also, we will not be creating availability sets, and we will only have one public IP address (we will investigate a different setup in one of the next posts) which will expose port 3389 for us (we will be restricting that via security groups though).

Of course I don’t have to remind you that for this you need a valid Azure subscription (but I assume you have one – even a trial 🙂). The script as a whole is available via GitHub and is linked at the end of this post.

 

General setup

First we start off with setting up our Azure subscription credentials and defining subscription details and the number of VMs to be created. Here we start off with getting our credentials (if we were using Azure AD and delegating credentials to a newly created user we could pass a PSCredential object as an argument). Later on we select one of the available subscriptions (we can use Out-GridView to give the end user the option of selecting one).

 

 

#region Setup subscription and script variables

## Get up your credentials (only if using Azure AD )
# $credentialSubscription = Get-Credential
# Add-AzureAccount -Credential $credentialSubscription

## Enable debugging
$DebugPreference ='Continue'


## Login into your subscription
Add-AzureAccount 

# using grid view select a subscription - in scripted scenario you will have the ID supplied to this script 
$subscriptionId =  (Get-AzureSubscription |Out-GridView -Title 'Choose your subscription' -PassThru).SubscriptionId

# Since user can click cancel if we dont have a subscriptionId we quit - again only for non automated scenario 
if ( [string]::IsNullOrEmpty( $subscriptionId ) ) { return }

# If you have more than 1 subscription associated it might be handy to choose current 🙂 
Select-AzureSubscription -SubscriptionId $subscriptionId -Current

## Switch to ARM - be aware that this functionality will be deprecated 
## https://github.com/Azure/azure-powershell/wiki/Deprecation-of-Switch-AzureMode-in-Azure-PowerShell 
Switch-AzureMode AzureResourceManager

## Check available locations 
$azureLocations= Get-AzureLocation 

## Check which locations we have available 
($azureLocations | Where-Object Name -eq ResourceGroup).Locations

## Select one location (we could use the grid view however I will deploy in West Europe)
$selectedAzureLocation = 'West Europe'

## Check registrered providers - also useful when looking for resources 
Get-AzureProvider

## Define resources prefix that we will use across this script 
## If this would be coming from variable outside of script we would need to make sure its lowercase 
$rscprefix = ('armdeploy').ToLower()

## Create tags to be used later 
$tags = New-Object System.Collections.ArrayList
$tags.Add( @{ Name = 'project'; Value = 'armdeploy' } )
$tags.Add( @{ Name = 'env'; Value = 'demo' } )

## Another way to create tags 
$tags = @( @{ Name='project'; Value='armdeploy' }, @{ Name='env'; Value='demo'} )

## Number of VMs to create 
$VMcount=4

#endregion

 

The most important part here is switching the Azure operations mode, done with:

Switch-AzureMode AzureResourceManager

This command has been deprecated and will not be available in the future! Please take a look at the post here for more detailed information!

And since I like to be in control of what’s going on, I tend to change the output to be more verbose on the debug side. This is done easily by specifying:

## Enable debugging
$DebugPreference ='Continue'

 

Create resource group

Within Azure nowadays we have the concept of resource groups, which are a form of “containers” for keeping related resources together. So if we want to create new objects we must start with a resource group. Creating one is quite easy.

#region Create Azure Resource Group

## Prepare resource group name 
$rscgName = "rg-${rscprefix}"

## Check if our resource group exists 
If (!(Test-AzureResourceGroup -ResourceGroupName $rscgName))
    {
        # Does not exist - create it - also set up deployment name
        $resourceGroup = New-AzureResourceGroup -Name $rscgName -Location $selectedAzureLocation -Tag $tags -DeploymentName "deploy-${rscgName}"
    }
else
    {
        # Exists - get the resource by resource group name
        $resourceGroup = Get-AzureResourceGroup -Name $rscgName
    }


#endregion

 

Throughout the rest of the post you will see me checking for resources using Test-<typeOfResource>, however looking at GitHub shows that some of those are deprecated as well. So it might be that this part will require a bit of rework.

Create storage account

In order to store OS and data disks we must have an object within Azure, and here we can utilize Azure storage accounts. For this we create an account – but in a real life scenario you might just go ahead and use an existing one.

#region Create Azure Storage Account

# Since we need to store the virtual machine data somewhere we need a storage account 
# we can script in a way that if it does not exist it will be created within our resource group

## Storage account name 
$saName="sa${rscprefix}"


## handy method to find valid types for storage account type

# First we get our command 
$gc= get-command New-AzureStorageAccount

# Then we navigate to property holding attributes which are TypeId of System.Management.Automation.ValidateSetAttribute 
# so it is clearly visible that those will be validate set values 
$gc.Parameters['Type'].Attributes.ValidValues


# Based on the above lets choose a type for the storage account 
$saType = 'Standard_LRS'


## Here we will check if we have storage account within our resource group
if (!(Test-AzureResource -ResourceName $saName -ResourceType 'Microsoft.Storage/storageAccounts' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        # No storage account so lets go ahead and create it based on parameters we have above
        $sa = New-AzureStorageAccount -Name $saName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -Type $saType

    }
else
    {
        # Storage account exists - lets grab its resource 
        $sa = Get-AzureStorageAccount -ResourceGroupName $resourceGroup.ResourceGroupName -Name $saName
    }


## Once this is completed lets set subscription storage account since we do have ID already 
Set-AzureSubscription -SubscriptionId $subscriptionId -CurrentStorageAccountName $saName


#endregion

 

Create virtual networks

In order to have networking running properly you need a network. I really like the concept of virtual networks and subnets, and once they connect directly with network interfaces and other objects things start to make sense – it all interconnects 🙂

#region Create Azure Virtual Network

$vnetName = "vnet-${rscprefix}"


$vNetsubnet1Name = "${rscprefix}-subnet1"

$vNetsubnet2Name = "${rscprefix}-subnet2"

# Create Virtual Network if it doesn't exist
if (!(Test-AzureResource -ResourceName $vnetName -ResourceType 'Microsoft.Network/virtualNetworks' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    
    # Create first subnet 
    $vSubnet1 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet1Name -AddressPrefix '10.0.1.0/24'

    # Create second subnet
    $vSubnet2 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet2Name -AddressPrefix '10.0.2.0/24'

    # Create virtual network
    $vNetwork = New-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AddressPrefix '10.0.0.0/16' -Subnet $vSubnet1, $vSubnet2 -Tag $tags

} 
else 
{

    # retrieve virtual network if already exists
    $vNetwork = Get-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName

}

#endregion

You can see above that I create 2 subnets. Although I could get away with one, the second one we might use in upcoming posts.

 

Create public IP address

As mentioned before – I’m after a really simple setup here, so I will just create a single public IP address (and make sure it is resolvable with DNS) which I will use later to connect to the VMs.

#region Create Azure Public IP address

$publicIPname = "pip-${rscprefix}" # PublicIP => pip

$publicIPdns = "dns-${rscprefix}"  # this will be our DNS name


## Here we look for an available public DNS name, retrying with a numeric suffix
$retryCntDns = 0

do
{
    $retryCntDns++
    $publicIPdns="dns-${rscprefix}-${retryCntDns}"
    $domainAvailable = ( Test-AzureDnsAvailability -DomainQualifiedName $publicIPdns -Location $selectedAzureLocation )
}
while(!$domainAvailable -and $retryCntDns -lt 3) 

# Check if we have our resource already existing 
if (!(Test-AzureResource -ResourceName $publicIPname -ResourceType 'Microsoft.Network/publicIPAddresses' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    If (!$domainAvailable)
    {
        # we dont have public domain available here - we create without DNS entry
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -Tag $tags
    }
    else
    {
        # We do have dns available - lets create it with DNS name
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -DomainNameLabel $publicIPdns -Tag $tags
    }
    

} 
else 
{
    # Seems like we have already public IP address so we can just go ahead and retrieve it
    $publicIp = Get-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network security group

To provide security we can now define ACLs on objects like subnets / network interfaces, which allows us to have granular security. Below I will just create one rule for remote desktop access (in this example allowed from any source – which is not a good thing in production).

#region Create Network Security Group & Rules

# Define unique name for NSG resource
$nsgName = "nsg-${rscprefix}"


if (!(Test-AzureResource -ResourceName $nsgName -ResourceType 'Microsoft.Network/networkSecurityGroups' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{

    # Create RDP access rule (at the script allow from everywhere - you should investigate this for your environment security)
    $nsgRule_RDP = New-AzureNetworkSecurityRuleConfig `
        -Name 'allow-in-rdp' `
        -Description 'Allow Remote Desktop Access' `
        -SourceAddressPrefix * `
        -DestinationAddressPrefix * `
        -Protocol Tcp `
        -SourcePortRange * `
        -DestinationPortRange 3389 `
        -Direction Inbound `
        -Access Allow `
        -Priority 100

    # Create Network Security Group with Rule above 
    $nsg = New-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SecurityRules $nsgRule_RDP -Tag $tags

} 
else 
{
    # Get NSG if already created
    $nsg = Get-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network interfaces

Now, to connect everything together, we create network interfaces. To the first interface we will additionally attach our public IP address.

#region Define network interfaces 

$networkInterfaces = @() # We will use this array to hold our network interfaces

# For each VM we will create a network interface

for ($count = 1; $count -le $VMcount; $count++) 
{

    $nicName = "${rscprefix}-nic${count}"

    if (!(Test-AzureResource -ResourceName $nicName -ResourceType 'Microsoft.Network/networkInterfaces' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {

        $nicIndex = $count - 1
        
        # The first VM will be our domain controller/DNS and it needs static IP address 
        if ($count -eq 1)
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id -IpConfigurationName 'ipconfig-dc01' -PrivateIpAddress 10.0.1.4 -PublicIpAddressId $publicIp.Id
        }
        else
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id
        }
         

    } 
    else 
    {
        # retrieve existing
        $networkInterfaces += Get-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName
    }

}

#endregion

 

Provision VMs

And now the time has come to finally provision the virtual machines based on the resources we have prepared.

#region Provision virtual machines


## If you would like to you could use those to present enduser (or yourself) with visual option to choose publisher/offer and SKU 
## as this is scripted version example we will use hardcoded values 

#$publisherName = ( Get-AzureVMImagePublisher -Location $selectedAzureLocation ).PublisherName | Out-GridView -Title 'Select a VM Image Publisher ...'  -PassThru
#$offerName = ( Get-AzureVMImageOffer -PublisherName $publisherName -Location $selectedAzureLocation ).Offer | Out-GridView -Title 'Select a VM Image Offer ...' -PassThru
#$skuName = ( Get-AzureVMImageSku -PublisherName $publisherName -Offer $offerName -Location $selectedAzureLocation ).Skus |Out-GridView -Title 'Select a VM Image SKU' -PassThru

$publisherName = 'MicrosoftWindowsServer'
$offerName='WindowsServer'
$skuName='2016-Technical-Preview-3-with-Containers'

# Take latest version
$version = 'latest'

# We will use basic version of VMs - later we will be able to provision it further
$vmSize = 'Basic_A1'

# Get credentials for admin account - you may want to modify username
$vmAdminCreds = Get-Credential Adminowski -Message 'Provide credentials for admin account'

# array to hold VMs
$vm = @()

# Create VMs
for ($count = 1; $count -le $VMcount; $count++) 
{ 
    
    # create suffixed VM name
    $vmName = "vm-${count}"

    # Check if resource already exists
    if (!(Test-AzureResource -ResourceName $vmName -ResourceType 'Microsoft.Compute/virtualMachines' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        
        $vmIndex = $count - 1

        $osDiskLabel = 'OSDisk'
    
        $osDiskName = "${rscprefix}-${vmName}-osdisk"

        $osDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${osDiskName}.vhd"

        $dataDiskSize = 200 # Size in GB

        $dataDiskLabel = 'DataDisk01'

        $dataDiskName = "${rscprefix}-${vmName}-datadisk01"

        $dataDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${dataDiskName}.vhd"

        $vmConfig =  New-AzureVMConfig -VMName $vmName -VMSize $vmSize | `
            Set-AzureVMOperatingSystem `
                -Windows `
                -ComputerName $vmName `
                -Credential $vmAdminCreds `
                -ProvisionVMAgent `
                -EnableAutoUpdate |
            Set-AzureVMSourceImage `
                -PublisherName $publisherName `
                -Offer $offerName `
                -Skus $skuName `
                -Version $version |
            Set-AzureVMOSDisk `
                -Name $osDiskLabel `
                -VhdUri $osDiskUri `
                -CreateOption fromImage |
            Add-AzureVMDataDisk `
                -Name $dataDiskLabel `
                -DiskSizeInGB $dataDiskSize `
                -VhdUri $dataDiskURI `
                -CreateOption empty |
            Add-AzureVMNetworkInterface `
                -Id $networkInterfaces[$vmIndex].Id `
                -Primary

        New-AzureVM `
            -VM $vmConfig `
            -ResourceGroupName $resourceGroup.ResourceGroupName `
            -Location $selectedAzureLocation `
            -Tags $tags

    }
    else
    {
        # Get the VM if already provisioned
        $vm += Get-AzureVM -Name $vmName -ResourceGroupName $resourceGroup.ResourceGroupName
    }


}

#endregion

 

Look at completed action

Once the whole script completes we get direct access to our newly created resources. It looks good and is a noteworthy starting point for automation and orchestration. From here the next logical step is to describe how this infrastructure should be configured with DSC, and that is something we will do in one of our next posts.

 

[screenshot: the newly created resources]

 

Happy powershelling 🙂

 

 

 

Script in full


Running ElasticSearch/Kibana and Logstash on Docker

In today’s world, if the combination of words in the subject is new to you, it means you need to catch up quickly 😀 In the IT world Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say nowadays we get DevOps that change the world with the click of a button 😀

Today we will discuss how, by running ElasticSearch + Logstash and Kibana on Docker, you can visualise your environment’s behaviour and events. At this stage I would like to point out that this can be useful not only in IT, where you get insights into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single “go” you will build the required components to see its potential.

Since this will only touch the real basics, I will try to point you to more interesting sources of information.

The whole exercise will be done on a host running Ubuntu with the following version installed:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker engine on this OS, so make sure you have the engine installed.

As a quick verification, this is the version of Docker running during the write-up of this post:

Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

 

Since we have that ready, let’s fire up an instance of ElasticSearch. As we would like to store data outside of the container, we need to create a folder somewhere on the host. Since this is only a non-production exercise I will just use a simple folder in the root structure. For this purpose I have created a folder called cDocker and within it a subfolder data/elasticsearch. This can be achieved by running the following in the console:

sudo mkdir -p /cDocker/data/elasticsearch

Once ready we can kick off the creation of our container:

sudo docker run -d --name elasticsearch -p 9200:9200 -v /cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all the required image layers we can see the container running on our Docker host:

[screenshot: elasticsearch container created and running]

 

For communicating with the API you can see we have exposed port 9200. For ease of making API calls I will be using the Postman add-on for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:

[screenshot: elasticsearch _status API response]
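
If you prefer the command line over Postman, the same check can be done with curl (replace <IP> with your Docker host address):

curl -XGET http://<IP>:9200/_status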

 

For the next part we will create the Logstash container. We do this by creating a container based on the Logstash image. The main difference here is that we will link our elasticsearch container so they will be able to talk to each other.

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 TCP/UDP and mount a volume for configuration (here I use $(pwd) for an existing folder in my current console session). Next we link our elasticsearch container and give it the alias db. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf; since that file does not exist yet, we must create it. The contents of that file come directly from the Logstash documentation and are a really basic configuration enabling us to see a working solution.
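
A minimal sketch of such a configuration – a tcp/udp input on our exposed port and output to the linked elasticsearch container under its db alias (parameter names may differ slightly between Logstash versions):

input {
  tcp {
    port => 25826
  }
  udp {
    port => 25826
  }
}

output {
  elasticsearch {
    hosts => ["db:9200"]
  }
  stdout {
    codec => rubydebug
  }
}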

Now if I open 2 session windows – one to tail the logstash container logs and another to create a telnet connection to port 25826 – we will see that the messages I type into the telnet session get parsed and forwarded to elasticsearch.

[screenshot: test message processed by logstash]

 

Of course this kind of configuration is only good for an exercise, and it quickly shows how nicely we can get the system running.

Since that’s ready it’s time to set up Kibana. It’s quite easy using the default image from Docker Hub. I have chosen to link the containers for ease of this exercise.

docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana

And seconds later we can log in to our Kibana server and take a look at our forensic details 🙂 The test message we sent before is already visible! How cool is that 😀?

 

 

[screenshot: first event visible in Kibana]

Let’s add some extra fake messages so we have something to visualise. I will do that using the telnet command, sending some dummy messages to logstash.
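
Something along these lines will do – connect with telnet and type a few throw-away lines (the content below is obviously just made-up sample input; replace <IP> with your Docker host address):

telnet <IP> 25826
login failed for user admin
login failed for user guest
order processed in 231ms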

After that’s done 🙂 we can create visualizations – and from there onwards … awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power there, and you should explore the resources available if you want to do more 😀

[screenshot: first dashboard with pie charts]

 

Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production – using Redis to avoid bottlenecks and lost messages, avoiding complex message parsing, etc. We will try to look into some of those in upcoming posts!

 

 


Rise of Docker – the game changer technology

If you have not yet been working with Docker containers – or worse… if you have not yet really heard about Docker and the significant changes it brings – then you should go find more information!

In simple words, Docker does the thing we always wanted – it isolates applications from our host layer. This enables the possibility of creating microservices that can be dynamically scaled / updated / re-deployed!

If you would like to imagine how this whole Docker thing works, then I’m sure that by looking at the image below you will grasp the idea behind it!

[diagram: Docker on Windows Server and Linux hosts]

 

 

Things to keep in mind are that Docker is not some black magic box… it requires underlying host components to run on top of. What I mean by that is: if you need to run Windows containers you will need a Windows host, and the same principle applies to Linux containers – you will need a Linux host.

Up to this point there was no real support for Docker containers on Windows. However, by the time of writing this, Microsoft has released Windows Server 2016 (Technical Preview), which brings major changes and, primarily of interest to us, support for containers!

One of the things that Microsoft has made people aware of is that you will be able to manage containers with Docker and with PowerShell… but… yep, there is a but – containers created with one cannot be managed with the other. I think that’s a fair trade-off, but it’s something that will potentially change.

 

In the meantime I invite you to explore Docker Hub and help yourself to more detailed information by exploring the Docker docs.

In one of the next posts we will discuss how to get a Windows Docker container running on Windows Server 2016 (TP3)! With that brief intro to Docker, I hope to see you again!