
ChatOps using Hubot – Zabbix maintenance

 

 



 

This post is a supplement to the GitHub repo available at https://github.com/RafPe/hubot-zabbix-scripts

 


 

 

So the day has finally come when I can write about my recent involvement in automation 🙂 this time with the use of Hubot ( in this role my favorite, Bender ) and the good old Rocket.Chat.

 

Simple idea:

If we need to do it once – let's automate it, as for sure someone else will need to do it at least once as well

 

And in most cases it's true 🙂 So one day I just woke up quite early. Really too early to go to work already 🙂 and too late to still get really good sleep. So I got the thing we all think of in the morning ….. yezzzz, coffee 🙂 And then I thought about the things that people around me have been doing manually for quite a while :/

The challenge which came out of that short moment of thinking was: “setting Zabbix server maintenance with Hubot ( Bender )“

 

Getting pieces together:

Now I really liked that idea. It was around 6AM, my coffee was halfway through, so I geared up and was ready when I opened my laptop. What was really challenging here is the fact that I have never programmed in CoffeeScript nor in Python, and those are the 2 main components used to bake this solution. However, at the end of the day it's only a different grammar for getting things done 🙂

I decided not to reinvent the wheel and looked at things that already work. Since at the moment I have been automating a lot with Ansible, I looked at their Github page with extra modules.

And that was exactly what I needed. Then I just went ahead and downloaded Hubot – following the nice and simple documentation. Based on the info there, getting the CoffeeScript to do exactly what I needed was just a matter of minutes 🙂 ( at least I hoped so )

 

So this is a proxy ?

Exactly. The CoffeeScript script in Hubot makes sure we respond to properly defined regex patterns which correspond to commands given to our Hubot. From there we execute the Python script.
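Just to illustrate the idea, a minimal sketch of such a proxy could look like the following – this is not the actual script from the repo, and the command regex and argument wiring are assumptions made for illustration only:

# Minimal sketch of the CoffeeScript proxy - not the real repo script
{spawn} = require 'child_process'

module.exports = (robot) ->
  robot.respond /zbx set maintenance on (.+) for (\d+)/i, (msg) ->
    args = [process.env.HUBOT_ZBX_PYMAINT,
            '-u', process.env.HUBOT_ZBX_USER, '-p', process.env.HUBOT_ZBX_PW,
            '-s', process.env.HUBOT_ZBX_URL, '-a', 'set',
            '-t', msg.match[1], '-l', msg.match[2],
            '-r', msg.envelope.user.name]
    # hand the heavy lifting over to the Python script
    proc = spawn 'python', args
    proc.stdout.on 'data', (data) -> msg.send data.toString()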

So I placed the biggest effort on getting the Python script running. I googled around and managed to get it running with arguments, which in return opened the door to properly proxying from the CoffeeScript side.

 

The final version of the Python script ( final as of the write-up of this post ) has the following syntax:

python zbx-maint.py

usage: zbx-maint.py [-h] -u USER -p PASSWORD [-t TARGET] [-s SERVER] -a ACTION
                    [-l LENGTH] [-d DESC] [-r REQUESTOR] [-i ID]

 -u USER      : used to connect to zabbix - needs perm to create/delete maintenance
 -p PASSWORD  : password for the user above
 -t TARGET    : host/groups to create maintenance on
 -s SERVER    : URL of the zabbix server
 -a ACTION    : del or set
 -l LENGTH    : Number of minutes to have maintenance for
 -d DESC      : Additional description added to maintenance
 -r REQUESTOR : Used to pass who has requested action
 -i ID        : Name of maintenance - used for deletion
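As an example, a set request could look something like this ( all the values below are made up ):

python zbx-maint.py -u api_user -p S3cret -s https://zabbix.example.com \
       -a set -t "Web servers" -l 60 -d "Deploying release 1.2.3" -r rafpe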

 

What about security ?

All passwords and links used within the Hubot script are passed using environment variables. For proper control of processes and isolation I have been using supervisorD here ( which is a great tool for this ).

 

HUBOT_ZBX_USER      : user accessing zabbix
HUBOT_ZBX_PW        : password for the user
HUBOT_ZBX_URL       : zabbix server URL
HUBOT_ZBX_PYMAINT   : full path to zbx-maint.py script (used by coffee script)
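For illustration, a supervisorD program entry passing those variables could look roughly like this ( a hypothetical sketch – paths, names and values are examples only ):

; Hypothetical supervisord entry for hubot - adjust paths and secrets to your setup
[program:hubot]
command=/opt/hubot/bin/hubot --adapter rocketchat --name bender
directory=/opt/hubot
autostart=true
autorestart=true
environment=HUBOT_ZBX_USER="api_user",HUBOT_ZBX_PW="S3cret",HUBOT_ZBX_URL="https://zabbix.example.com",HUBOT_ZBX_PYMAINT="/opt/hubot/scripts/zbx-maint.py"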

 

Bender in action:

So without any further delay this is how it looks in action ….

 

[Screenshot: Bender handling a Zabbix maintenance request in chat]

 

 

 

Being considered:

I’m still looking for other people’s feedback to see what can be done better. Most likely I will be publishing some more Zabbix automations to enrich ChatOps and make life more interesting 🙂

 

 


DevOpsdays 2016 Amsterdam – Videos are here

If you have missed DevOpsdays in Amsterdam this year for whatever reason – then you can watch all published videos on the Vimeo channel! Just head out and go HERE

Some of my favorites :

DevOpsdays Amsterdam 2016 Day 1 – Adam Jacob from [email protected] on Vimeo.

DevOpsdays Amsterdam 2016 Day 1 – Avishai Ish-Shalom from [email protected] on Vimeo.

DevOpsdays Amsterdam 2016 Day 1 – Daniël van Gils from [email protected] on Vimeo.

 

Hope you will enjoy them as well!

 


Docker – ELK with compose v2


This post contains information which is based on the following entry:

Docker compose and ELK – Automate the automated deployment

To get an idea of how much has changed, it's worth checking that out 🙂



If you are working with Docker then for sure you are in for non-stop challenging and interesting times. And since Docker is so actively developed you cannot just build a solution and ‘forget about it’ – you would miss too much innovation.

So since I previously created my ELK stack with Docker Compose, I decided that it is finally a good time to move it to compose v2!

 

 

If you have not heard about the breaking changes, there is quite a nice blog post on the Docker blog where you can get all the info that will get you going. To avoid looking all over the internet, here is the link.
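In short, the v2 format wraps containers in a top-level services: key and adds first-class networks: and volumes: sections. A stripped-down sketch ( the image tags here are placeholders and not the exact contents of the repo's compose file ):

version: '2'

services:
  elasticsearch:
    image: elasticsearch:2.2
  logstash:
    image: logstash:2.2
  kibana:
    image: kibana:4.4

networks:
  default:
    driver: bridge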

So once you get an idea of how cool things can now be done, let's get going. We will start off by getting the files from the Github repository. This time it differs a bit from the previous posts – back then you could end up with a version of the repo which was not stable or just refused to work for whatever reason. I have now tagged a specific version, which allows you to get to a known working state of the code – in a nutshell, it will work 😀

so let’s get to it 😀

git clone https://github.com/RafPe/docker-elk-stack.git
git checkout tags/v2.0.0

Once you have this you can just start it off by typing

docker-compose up -d

This will commence creating containers which gives the following output:

[Screenshot: docker-compose up output creating the containers]

 

Let’s see if we have all containers running correctly by checking logs :

docker-compose logs

You will probably get output similar to the following:

[Screenshot: docker-compose logs output]

 

And that's basically how you would go about creating the stack with the default setup – but if you would like to tweak some settings, check out the following:

Logging:

I have limited the logging driver's file size and roll-over by using the following part of the compose file:

logging:
  driver: json-file
  options:
    max-size: "50m"
    max-file: "3"
    labels: "kibana"

 

Elasticsearch data persistence:

As for most development tasks I do not use persistent data. If you would like to have it for the Elasticsearch cluster, you will have to uncomment the following line in the compose file, specifying where to store the data:

volumes:
  # - ${PWD}/elasticsearch/data:/usr/share/elasticsearch/data

 

Logstash configuration:

By default Logstash will use demo-logstash.conf, which is configured with just the beats input and some filtering applied. Once processed, data will be sent to Elasticsearch. There are more ready-made Logstash config files under the ./logstash folder, so feel free to explore and possibly use them.
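For reference, a minimal beats-to-Elasticsearch pipeline ( not the exact contents of demo-logstash.conf – the port and hostname here are assumptions ) looks roughly like this:

input {
  beats {
    port => 5044
  }
}

filter {
  # grok / mutate rules would go here
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}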

 

 

If you have any comments – leave them behind, as I'm interested in your approach as well 😀

 


C# – Active Directory changes synchronization with cookie

In a recent post we discussed how to track Active Directory changes efficiently with PowerShell.

Now we can achieve the same thing with C#. And if you wonder why C#, since we already had it in PowerShell? Well, maybe you are writing a form of REST API for your enterprise? Or writing an application for personnel who are not fluent with scripting ( the people that do use a GUI 🙂 )

Nevertheless, this is going to be nice and easy. I will not be using screenshots of Visual Studio in this post, but just provide you with the information needed.

 

The architecture and design is totally up to you 🙂 I will introduce you to the basics needed to put the bits and pieces together. To hold the information we receive, it is best to create a class with the properties we are interested in and keep instances of it in a list.

public class adresult
{
   public string ObjName {get;set;}
   public string ObjDN   {get;set;}
   ...
   public string ObjXYZ  {get;set;} // Whatever other properties you would be interested in
}

 

That was easy 🙂 Now let's get to writing our application. I focus here on a console application but you can use whatever other type suits you.

Let’s prepare the LDAP connection:

                // Requires: using System.DirectoryServices; (add a reference to the System.DirectoryServices assembly)
                string ldapSrv = "LDAP://<LDAP-path>";
                string ldapFilter = "(objectClass=user)";

                // File to store our cookie
                string ldapCookie = @"c:\adsync-cookie.dat";

                // set up search
                DirectoryEntry dir = new DirectoryEntry(ldapSrv);
                DirectorySearcher searcher = new DirectorySearcher(dir);

                searcher.Filter = ldapFilter;
                searcher.PropertiesToLoad.Add("name");
                searcher.PropertiesToLoad.Add("distinguishedName");
                searcher.SearchScope = SearchScope.Subtree;
                searcher.ExtendedDN = ExtendedDN.Standard;

 

Next is the interesting part – the directory synchronization object:

// create directory synchronization object
DirectorySynchronization sync = new DirectorySynchronization();

// check whether a cookie file exists and if so, set the dirsync to use it
if (File.Exists(ldapCookie))
   {
      byte[] byteCookie = File.ReadAllBytes(ldapCookie);
      sync.ResetDirectorySynchronizationCookie(byteCookie);
   }

 

Lastly, we combine what we have prepared and execute the search:

// Assign previously created object to searcher 
searcher.DirectorySynchronization = sync;

// Create group of our objects
List<adresult> ADresults = new List<adresult>();

foreach (SearchResult result in searcher.FindAll())
  {
      adresult objAdresult = new adresult();
      objAdresult.ObjName  = (string)result.Properties["name"][0];
      
      string[] sExtendedDn = ((string)result.Properties["distinguishedName"][0]).Split(new Char[] { ';' });
      objAdresult.ObjDN    = sExtendedDn[2];

      ADresults.Add(objAdresult);
   }

// write new cookie value to file
File.WriteAllBytes(ldapCookie, sync.GetDirectorySynchronizationCookie());

// Return results 
return ADresults;
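For completeness, the snippets above would sit inside a method along these lines – the names here are placeholders rather than code from a specific project:

// Hypothetical wrapper pulling the snippets above together
public static List<adresult> GetDirectoryChanges()
{
    // LDAP setup, dirsync cookie handling, search and cookie persistence
    // from the snippets above go here
    // ...
    return ADresults;
}

// Usage: every call returns only the objects changed since the previous run
List<adresult> changes = GetDirectoryChanges();
Console.WriteLine("{0} object(s) changed since last sync", changes.Count);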

 

This concludes this short post. I hope you will be able to use it in your complex Active Directory scenarios.

 

 


C# – Generate Entity Framework SQL script

This one is going to be a really short one. As it happens that I need to enable others to recreate DBs for the API models I create, I usually deliver them a SQL script that does the work. So how do you generate one in VS?

Well, using the Package Manager Console you just call:

Update-Database -Script -SourceMigration:0

 

And that creates your SQL script – of course without any seed data 🙂
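If you only need the delta between two specific migrations, the same cmdlet also accepts a source and a target ( the migration names below are made-up examples ):

Update-Database -Script -SourceMigration:"AddCustomers" -TargetMigration:"AddOrders"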


Powershell with Azure – deploy IaaS with Azure Resource Manager

These days managing the cloud should be something that is well automated and that you can be really comfortable with. Microsoft Azure, when using Azure Resource Manager, allows you to manage infrastructure via APIs or via PowerShell ( which is calling the web APIs underneath ).

I have to say that both of the approaches are quite nice. I already worked some time ago on ARM JSON templates ( using the Visual Studio add-on ) and they enable you to perform advanced operations in a declarative way.

The good news is that we can also do that with PowerShell. I'm aware that all over the internet you can find ready scripts that will do deployments with a click of a button 🙂 but this is about exercising 🙂 as that's how we learn.

First, what you should make sure of is that you have the Azure PowerShell module installed. For now I have always been using the Web Platform Installer. Once installed you should have it listed when querying for modules:

PS S:\> Get-Module *azure* -ListAvailable
ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   0.9.8      Azure                               {Disable-AzureServiceProjectRemoteDesktop, Enable-AzureSer...

 

With the above prerequisite in place we can continue and go further with our exercise. Our target will be to deploy 4 virtual machines. The first of them will become a domain controller and should have a static IP address. The remaining ones will be using dynamic addresses. Also, we will not be creating availability groups, and we will only have one public IP address ( we will investigate a different setup in one of the next posts ) which will expose port 3389 for us ( although we will be restricting that via security groups ).

Of course I don't have to remind you that for this you need a valid Azure subscription ( but I assume you have one – even a trial 🙂 ). The script as a whole is available via GitHub and is linked at the end of this post.

 

General setup

First we start off by setting up our Azure subscription credentials and defining the subscription details and the number of VMs to be created. Here we start off by getting our credentials ( if we were using Azure AD and delegating credentials to a newly created user, we could pass a PSCredential object as an argument ). Later on we select one of the available subscriptions ( we can use Out-GridView to give the end user the option of selecting one ).

 

 

#region Setup subscription and script variables

## Get up your credentials (only if using Azure AD )
# $credentialSubscription = Get-Credential 
# Add-AzureAccount -Credential $credentialSubscription 

## Enable debugging
$DebugPreference ='Continue'


## Login into your subscription
Add-AzureAccount 

# using grid view select a subscription - in scripted scenario you will have the ID supplied to this script 
$subscriptionId =  (Get-AzureSubscription |Out-GridView -Title 'Choose your subscription' -PassThru).SubscriptionId

# Since user can click cancel if we dont have a subscriptionId we quit - again only for non automated scenario 
if ( [string]::IsNullOrEmpty( $subscriptionId ) ) { return }

# If you have more than 1 subscription associated it might be handy to choose current :) 
Select-AzureSubscription -SubscriptionId $subscriptionId -Current

## Switch to ARM - be aware that this functionality will be deprecated 
## https://github.com/Azure/azure-powershell/wiki/Deprecation-of-Switch-AzureMode-in-Azure-PowerShell 
Switch-AzureMode AzureResourceManager

## Check available locations 
$azureLocations= Get-AzureLocation 

## Check which locations we have available 
($azureLocations | Where-Object Name -eq ResourceGroup).Locations

## Select one location (we could use the grid view however I will deploy in West Europe)
$selectedAzureLocation = 'West Europe'

## Check registrered providers - also useful when looking for resources 
Get-AzureProvider

## Define resources prefix that we will use across this script 
## If this would be coming from variable outside of script we would need to make sure its lowercase 
$rscprefix = ('armdeploy').ToLower()

## Create tags to be used later 
$tags = New-Object System.Collections.ArrayList
$tags.Add( @{ Name = 'project'; Value = 'armdeploy' } )
$tags.Add( @{ Name = 'env'; Value = 'demo' } )

## Another way to create tags 
$tags = @( @{ Name='project'; Value='armdeploy' }, @{ Name='env'; Value='demo'} )

## Number of VMs to create 
$VMcount=4

#endregion

 

What is most important here is the part switching the Azure operations mode, done with:

Switch-AzureMode AzureResourceManager

This command has been deprecated and will not be available in the future! Please take a look at the post here for more detailed information!
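For reference, in the newer AzureRM module the mode switch is gone entirely and the ARM cmdlets simply carry an AzureRm prefix – a rough sketch of the equivalent flow ( not part of the original script ):

## Equivalent flow without Switch-AzureMode - newer AzureRM module
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId $subscriptionId
New-AzureRmResourceGroup -Name "rg-${rscprefix}" -Location 'West Europe'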

And since I like to be in control of what's going on, I tend to change the output to be more verbose on the debug side. This is easily done by specifying:

## Enable debugging
$DebugPreference ='Continue'

 

Create resource group

Within Azure nowadays we have the concept of resource groups, which are a form of “containers” for keeping related resources together. So if we want to create new objects we must start with a resource group. Creating it is quite easy.

#region Create Azure Resource Group

## Prepare resource group name 
$rscgName = "rg-${rscprefix}"

## Check if our resource group exists 
If (!(Test-AzureResourceGroup -ResourceGroupName $rscgName))
    {
        # Does not exist - create it - also set up deployment name
        $resourceGroup = New-AzureResourceGroup -Name $rscgName -Location $selectedAzureLocation -Tag $tags -DeploymentName "deploy-${rscgName}"
    }
else
    {
        # Exists - get the resource by resource group name
        $resourceGroup = Get-AzureResourceGroup -Name $rscgName
    }


#endregion

 

Throughout the rest of the post you will see me checking for resources using Test-<typeOfResource>, however looking at GitHub shows that some of those are deprecated as well. So it might be that this part will require a bit of rework.
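If the Test-* cmdlets are missing from your module version, a rough replacement ( an assumption on my side, using the newer AzureRM cmdlets ) is to query for the resource and check for $null:

## Hypothetical existence check replacing Test-AzureResource
$existing = Get-AzureRmResource -ResourceName $saName `
                                -ResourceType 'Microsoft.Storage/storageAccounts' `
                                -ResourceGroupName $resourceGroup.ResourceGroupName `
                                -ErrorAction SilentlyContinue

if ($null -eq $existing) { <# resource does not exist - create it #> }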

Create storage account

In order to store OS and data disks we must have an object within Azure. Here we can utilize Azure Storage accounts. For this we create an account – but in a real-life scenario you might just go ahead and use an existing one, for example.

#region Create Azure Storage Account

# Since we need to store the virtual machine data somewhere we need a storage account 
# we can script in a way that if it does not exist it will be created within our resource group

## Storage account name 
$saName="sa${rscprefix}"


## handy method to find valid types for storage account type

# First we get our command 
$gc= get-command New-AzureStorageAccount

# Then we navigate to property holding attributes which are TypeId of System.Management.Automation.ValidateSetAttribute 
# so it is clearly visible that those will be validate set values 
$gc.Parameters['Type'].Attributes.ValidValues


# Based on the above lets choose a type for the storage account 
$saType = 'Standard_LRS'


## Here we will check if we have storage account within our resource group
if (!(Test-AzureResource -ResourceName $saName -ResourceType 'Microsoft.Storage/storageAccounts' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        # No storage account so lets go ahead and create it based on parameters we have above
        $sa = New-AzureStorageAccount -Name $saName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -Type $saType

    }
else
    {
        # Storage account exists - lets grab its resource 
        $sa = Get-AzureStorageAccount -ResourceGroupName $resourceGroup.ResourceGroupName -Name $saName
    }


## Once this is completed lets set subscription storage account since we do have ID already 
Set-AzureSubscription -SubscriptionId $subscriptionId -CurrentStorageAccountName $saName


#endregion

 

Create virtual networks

In order to have networking running properly you need a network. I really like the concept of virtual networks and subnets, and once you connect them directly with network interfaces and other objects things start to make sense – it all interconnects 🙂

#region Create Azure Virtual Network

$vnetName = "vnet-${rscprefix}"


$vNetsubnet1Name = "${rscprefix}-subnet1"

$vNetsubnet2Name = "${rscprefix}-subnet2"

# Create Virtual Network if it doesn't exist
if (!(Test-AzureResource -ResourceName $vnetName -ResourceType 'Microsoft.Network/virtualNetworks' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    
    # Create first subnet 
    $vSubnet1 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet1Name -AddressPrefix '10.0.1.0/24'

    # Create second subnet
    $vSubnet2 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet2Name -AddressPrefix '10.0.2.0/24'

    # Create virtual network
    $vNetwork = New-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AddressPrefix '10.0.0.0/16' -Subnet $vSubnet1, $vSubnet2 -Tag $tags

} 
else 
{

    # retrieve virtual network if already exists
    $vNetwork = Get-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName

}

#endregion

You can see above that I create 2 subnets. Although I could get away with one – the second one we might use in upcoming posts.

 

Create public IP address

As mentioned before – I'm after a really simple setup here. So I will just create a single public IP address ( and make sure it is resolvable with DNS ), which I will be using later to connect to the VMs.

#region Create Azure Public IP address

$publicIPname = "pip-${rscprefix}" # PublicIP => pip

$publicIPdns = "dns-${rscprefix}"  # this will be our DNS name


## Here we check public IP DNS name availability ( retrying with a suffix if the name is taken )
$retryCntDns = 0

do
{
    $retryCntDns++
    $publicIPdns="dns-${rscprefix}-${retryCntDns}"
    $domainAvailable = ( Test-AzureDnsAvailability -DomainQualifiedName $publicIPdns -Location $selectedAzureLocation )
}
while(!$domainAvailable -and $retryCntDns -lt 3) 

# Check if we have our resource already existing 
if (!(Test-AzureResource -ResourceName $publicIPname -ResourceType 'Microsoft.Network/publicIPAddresses' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    If (!$domainAvailable)
    {
        # we dont have public domain available here - we create without DNS entry
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -Tag $tags
    }
    else
    {
        # We do have dns available - lets create it with DNS name
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -DomainNameLabel $publicIPdns -Tag $tags
    }
    

} 
else 
{
    # Seems like we have already public IP address so we can just go ahead and retrieve it
    $publicIp = Get-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network security group

To provide security we can now define ACLs on objects like subnets / network interfaces, which allows us to have granular security. Below I will just create one for remote desktop access ( in this example allowing access from any source – which is not a good thing in production ).

#region Create Network Security Group & Rules

# Define unique name for NSG resource
$nsgName = "nsg-${rscprefix}"


if (!(Test-AzureResource -ResourceName $nsgName -ResourceType 'Microsoft.Network/networkSecurityGroups' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{

    # Create RDP access rule (the script allows access from everywhere - review this against your environment's security requirements)
    $nsgRule_RDP = New-AzureNetworkSecurityRuleConfig `
        -Name 'allow-in-rdp' `
        -Description 'Allow Remote Desktop Access' `
        -SourceAddressPrefix * `
        -DestinationAddressPrefix * `
        -Protocol Tcp `
        -SourcePortRange * `
        -DestinationPortRange 3389 `
        -Direction Inbound `
        -Access Allow `
        -Priority 100

    # Create Network Security Group with Rule above 
    $nsg = New-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SecurityRules $nsgRule_RDP -Tag $tags

} 
else 
{
    # Get NSG if already created
    $nsg = Get-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network interfaces

Now, to connect it all together, we create the network interfaces. To the first interface we will additionally attach our public IP address.

#region Define network interfaces 

$networkInterfaces = @() # We will use this array to hold our network interfaces

# For each VM we will create a network interface

for ($count = 1; $count -le $VMcount; $count++) 
{

    $nicName = "${rscprefix}-nic${count}"

    if (!(Test-AzureResource -ResourceName $nicName -ResourceType 'Microsoft.Network/networkInterfaces' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {

        $nicIndex = $count - 1
        
        # The first VM will be our domain controller/DNS and it needs static IP address 
        if ($count -eq 1)
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id -IpConfigurationName 'ipconfig-dc01' -PrivateIpAddress 10.0.1.4 -PublicIpAddressId $publicIp.Id
        }
        else
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id
        }
         

    } 
    else 
    {
        # retrieve existing
        $networkInterfaces += Get-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName
    }

}

#endregion

 

Provision VMs

And now the time has come to finally provision the virtual machines based on the resources we have prepared.

#region Provision virtual machines


## If you would like to you could use those to present enduser (or yourself) with visual option to choose publisher/offer and SKU 
## as this is scripted version example we will use hardcoded values 

#$publisherName = ( Get-AzureVMImagePublisher -Location $selectedAzureLocation ).PublisherName | Out-GridView -Title 'Select a VM Image Publisher ...'  -PassThru
#$offerName = ( Get-AzureVMImageOffer -PublisherName $publisherName -Location $selectedAzureLocation ).Offer | Out-GridView -Title 'Select a VM Image Offer ...' -PassThru
#$skuName = ( Get-AzureVMImageSku -PublisherName $publisherName -Offer $offerName -Location $selectedAzureLocation ).Skus |Out-GridView -Title 'Select a VM Image SKU' -PassThru

$publisherName = 'MicrosoftWindowsServer'
$offerName='WindowsServer'
$skuName='2016-Technical-Preview-3-with-Containers'

# Take latest version
$version = 'latest'

# We will use basic version of VMs - later we will be able to provision it further
$vmSize = 'Basic_A1'

# Get credentials for admin account - you may want to modify username
$vmAdminCreds = Get-Credential Adminowski -Message 'Provide credentials for admin account'

# array to hold VMs
$vm = @()

# Create VMs
for ($count = 1; $count -le $VMcount; $count++) 
{ 
    
    # create suffixed VM name
    $vmName = "vm-${count}"

    # Check if resource already exists
    if (!(Test-AzureResource -ResourceName $vmName -ResourceType 'Microsoft.Compute/virtualMachines' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        
        $vmIndex = $count - 1

        $osDiskLabel = 'OSDisk'
    
        $osDiskName = "${rscprefix}-${vmName}-osdisk"

        $osDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${osDiskName}.vhd"

        $dataDiskSize = 200 # Size in GB

        $dataDiskLabel = 'DataDisk01'

        $dataDiskName = "${rscprefix}-${vmName}-datadisk01"

        $dataDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${dataDiskName}.vhd"

        $vmConfig =  New-AzureVMConfig -VMName $vmName -VMSize $vmSize | `
            Set-AzureVMOperatingSystem `
                -Windows `
                -ComputerName $vmName `
                -Credential $vmAdminCreds `
                -ProvisionVMAgent `
                -EnableAutoUpdate |
            Set-AzureVMSourceImage `
                -PublisherName $publisherName `
                -Offer $offerName `
                -Skus $skuName `
                -Version $version |
            Set-AzureVMOSDisk `
                -Name $osDiskLabel `
                -VhdUri $osDiskUri `
                -CreateOption fromImage |
            Add-AzureVMDataDisk `
                -Name $dataDiskLabel `
                -DiskSizeInGB $dataDiskSize `
                -VhdUri $dataDiskURI `
                -CreateOption empty |
            Add-AzureVMNetworkInterface `
                -Id $networkInterfaces[$vmIndex].Id `
                -Primary

        New-AzureVM `
            -VM $vmConfig `
            -ResourceGroupName $resourceGroup.ResourceGroupName `
            -Location $selectedAzureLocation `
            -Tags $tags

    }
    else
    {
        # Get the VM if already provisioned
        $vm += Get-AzureVM -Name $vmName -ResourceGroupName $resourceGroup.ResourceGroupName
    }


}

#endregion

 

Look at completed action

Once the whole script completes we get direct access to our newly created resources. It looks good and is a noteworthy starting point for automation and orchestration. From here the next logical step is to describe how this infrastructure should be configured with DSC, and that is something we will do in one of our next posts.

 

[Screenshot: deployed resources visible in the Azure portal]

 

Happy powershelling 🙂

 

 

 

Script in full


ASP.NET 5 – Dependency injection with AutoFac

Today we will shift a bit from previous tracks in order to research more on Visual Studio 2015 powering us with MVC 6 / ASP.NET 5. I personally find that Microsoft is going in the right direction – especially being so open source.

But coming back to the original subject of this post. When you create a new project in VS2015 and select .NET 5, you can see that it is still in preview – therefore it might be that the information provided in this post is already out of date! I recommend you take that into account.

For .NET 5 documentation look here. And if you are more interested in Autofac, check the documentation here.

Startup.cs

        // Requires: using Autofac; and using Autofac.Framework.DependencyInjection; (for builder.Populate)
        public IServiceProvider ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            //create Autofac container build
            var builder = new ContainerBuilder();

            //populate the container with services here..
            builder.RegisterType<DemoService>().As<IProjectDemo>();
            builder.Populate(services);

            //build container
            var container = builder.Build();

            //return service provider
            return container.ResolveOptional<IServiceProvider>();
        }
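For context, DemoService / IProjectDemo above could be any pair like the following, consumed via plain constructor injection in a controller ( a hypothetical sketch, not code from the original project ):

    public interface IProjectDemo
    {
        string GetGreeting();
    }

    public class DemoService : IProjectDemo
    {
        public string GetGreeting() => "Hello from Autofac!";
    }

    public class DemoController : Controller
    {
        private readonly IProjectDemo _demo;

        // Autofac resolves IProjectDemo and injects DemoService here
        public DemoController(IProjectDemo demo)
        {
            _demo = demo;
        }

        public IActionResult Index() => Content(_demo.GetGreeting());
    }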

 

Project.json

  "dependencies": {
    "Autofac": "4.0.0-beta6-110",
    "Autofac.Framework.DependencyInjection": "4.0.0-beta6-110",
    "Microsoft.AspNet.Mvc": "6.0.0-beta6",
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta6",
    "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6",
    "Microsoft.AspNet.StaticFiles": "1.0.0-beta6"

  },

 

What I also learned at this stage – it is not smart to mix different beta versions. So if possible try to keep them on the same level. Hope this helps and will get you going!

 

We will definitely be revisiting Autofac in later posts when we play around with creating REST services or other apps!


Docker on Windows – Running Windows Server 2016

So without any further delays we go ahead and create our environment to play around with containers – however this time we will do it on Windows!

As you know, with the release of Windows Server 2016 TP3 we now have the ability to play around with containers on a Windows host. Since Docker is under heavy development it is possible that a lot will change in RTM, therefore check for any updates to this post 😀

If you are a happy automating admin like me, probably one of the first commands you run would be ….

powershell.exe

… 🙂 of course – the Windows engineer's best friend!

 

To get this show started I'm using Windows Server 2016 TP3 on Azure, as that gives the biggest flexibility. Microsoft has already posted some good pointers on how to get started using Docker. That documentation ( or more or less technical guide ) is available here. It explains how to quickly get started.

So we start off by logging into our Windows host and starting a PowerShell session:

[Screenshot: PowerShell session on the Windows Server 2016 TP3 host]

 

A cool thing which I wasn't aware of is syntax highlighting ( something that people on Unix have had for a while 🙂 ) which makes working with PS and its output more readable ( in my opinion ).

So as mentioned in my previous post, you have the option to manage containers with Docker ( as we know it from Ubuntu, for example ) or with PowerShell. Since I have been working with Docker already, I decided to investigate that route and leave PowerShell for a bit later.

Following the documentation which I have linked above we can see that Microsoft has been really good and prepared a script for us which will take care of the initial configuration and the download of all necessary Docker tools.

In order to download it we need to execute the following command :

wget -uri http://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1

If you would rather get to the source of the script, it's available here.

Once downloaded you can just start the script and it will take care of the required configuration and the download of the images. Yes … downloading those images can take a while. It's approximately ~18GB of data that will be downloaded. So you may want to start the configuration before your favourite TV show or maybe a game of football in the park 😀

Once completed we have access to the goodies – we can start playing with Docker. The first thing that is good to do is to check out our Docker information … easily done by

docker info

In my case the output is following :

[Screenshot: docker info output]

 

Off the top of my head, what is definitely worth investigating is the logging driver ( when used a bit differently it allows you to ship Docker logs to a centralised system, i.e. Elasticsearch … but about that a bit later 😀 ). The rest we will investigate along the way in this learning series of Docker on Windows.

Now what would Docker be without images! After running that long configuration process we get access to the Windows images prepared for us. If you have not yet been playing around with those, you can list them by issuing

docker images

With that we get the available images :

[Screenshot: docker images output showing the windowsservercore image]

First thing to notice is that the approximate size of the default image is ~9.7GB, which raises the question these days – is that a lot? I think you need to answer that question yourself 🙂 or wait for MS to provide a bit more details ( unless those are out and I haven't found them 🙂 ). At the moment, from my experience with Docker on Ubuntu – setting up a Linux host and containers is a matter of minutes. So those GBs of data on Windows might be a bit of a show stopper for spinning up Windows hosts for Docker.

Now since we have our image it might be useful to get more detailed information about it. We can get it by issuing the command

docker inspect <Container Id> | <Image Id>

The results are following :

[
{
    "Id": "0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea896852e",
    "Parent": "",
    "Comment": "",
    "Created": "2015-08-14T15:51:55.051Z",
    "Container": "",
    "ContainerConfig": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": null,
        "PublishService": "",
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": null,
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": null,
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": null
    },
    "DockerVersion": "1.9.0-dev",
    "Author": "",
    "Config": null,
    "Architecture": "amd64",
    "Os": "windows",
    "Size": 9696754476,
    "VirtualSize": 9696754476,
    "GraphDriver": {
        "Name": "windowsfilter",
        "Data": {
            "dir": "C:\\ProgramData\\docker\\windowsfilter\\0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea89
6852e"
        }
    }
}
]

 

So here we go – we will create our first container by running the following command. It will give us regular output and will run in the background.

docker run -d --name firstcontainer windowsservercore powershell -command "& {for (;;) { [datetime]::now ; start-sleep -s 2} }"

To see what the container is outputting you can issue the command

docker logs <container Id>|<container name>

 

That's all fine … but how do we apply any customisations to our container? The process is fairly simple: we run a new container and make our changes. Once we are happy with the changes we have implemented, we can commit them and save our image. We will quickly explore this by creating a container which will host our IIS web server.

We begin with creating a new container and entering interactive session

docker run -it --name iisbase windowsservercore powershell

Once the container is up we are taken directly to a PowerShell session within that container. We will use the well-known way to get the base image configured. What we are after here is adding the web server role using PS. First let's check that it's definitely not installed:

Get-WindowsFeature -Name *web*

[Screenshot: Get-WindowsFeature -Name *web* output inside the container]

After that we will just add the web server role and then exit the container. Let's issue the command for installation of the role:

PS C:\> Add-WindowsFeature -Name Web-Server

[Screenshot: Add-WindowsFeature Web-Server running inside the container]

 

Before we exit there is something worth mentioning … the speed of containers ( at least at the moment of writing this blog, where people at MS are still working on it 🙂 ). It can be significantly improved by removing the anti-malware service from your base image. This can be done by running the following command:

Uninstall-WindowsFeature -Name Windows-Server-Antimalware

 

Now we can exit our container by simply typing

exit

A small thing worth mentioning 🙂 pasting clipboard string content into containers has been limited to ~50 characters, which is a work in progress and should be lifted in the next releases.

 

Ufff, so we got to the point where our container has been configured. It's time to build an image from it. This can be done ( at the moment ) only on containers which are stopped. To execute the commit run:

# docker commit <container Id> | <container Name> <RepoName> 
docker commit 5e5f0d34988a rafpe:iis 

 

The process takes a bit of time, however once completed we have access to our new image which allows us to spin up multiple containers 🙂 If you would like to evaluate the image created you can use the approach and commands discussed earlier in this post.

[Screenshot: docker images showing the newly committed rafpe:iis image]

 

And as you can see our new image rafpe is slightly bigger than the base image ( this is due to the changes we made ).

Let’s go ahead and do exactly what we have been waiting for – spin up a new container based on this image.

docker run -d --name webos -p 80:80 rafpe:iis

Now, at the moment of writing I could not connect to the exposed port 80 from the container host by issuing something along the lines of

curl 127.0.0.1:80

According to information I have found on the MSDN forums, people are experiencing the same behaviour. However, from external systems ( if enabled on the firewall ) you can reach your container's exposed port ( check that you have NAT correctly set up ).

 

Now, to add something useful if you would like to try a different approach to this exercise with Docker: to find different images use the following command:

docker search iis

Uff – I think that information should get you going. Please be advised that this is a learning course for me as well 🙂 so if I have written something terribly misleading here, please let me know and I will update it.

So as not to leave you without pointing to some good places, here are the links I used for this:

 

Hope you liked the article! Stay tuned for more!

 

 


Rise of Docker – the game changer technology

If you have not yet been working with Docker containers – or worse… if you have not yet really heard about Docker and the significant changes it brings – then you should find out more!

In simple words, Docker does the thing we always wanted – it isolates applications from our host layer. This enables the possibility of creating micro services that can be dynamically scaled / updated / re-deployed!

If you would like to imagine how this whole Docker thing works, then I'm sure that by looking at the image below you will grasp the idea behind it!

[Diagram: Docker containers running on Windows Server and Linux hosts]

 

 

So a thing to keep in mind is that Docker is not some black magic box … it requires the underlying host components to run on top of. What I mean by that: if you need to run Windows containers you will need a Windows host, and the same principle applies to Linux containers – you will also need a Linux host.

Up to this point there was no real support for Docker containers on Windows. However, by the time of writing this document Microsoft has released Windows Server 2016, which brings major changes and, of primary interest to us, support for containers!

One of the things Microsoft has made people aware of is that you will be able to manage containers with Docker and with PowerShell … but … yep, there is a but – containers created with one cannot be managed with the other. I think that's a fair trade-off, but it's something that will potentially change.

 

In the meantime I invite you to explore Docker Hub and help yourself to more detailed information by exploring the Docker docs.

In one of the next posts we will discuss how to get a Windows Docker container running on Windows Server 2016 ( TP3 )! With that short intro to Docker, I hope to see you again!

 

 


Powershell – build complex module with functions dependency tree

Hey! So the time has come to share with you one of my recent achievements within automation. As you may have noticed in the subject of the post, we will be focusing on complex functions with dependencies on other functions within a module. If that were not enough … we will execute them remotely within a session. Isn't that just uber cool?

So in my last post I focused on a brief intro to what exporting of functions could look like. Today we start off with a definition of how we set up our functions and what we will use to build dependencies ( of course keeping it all under control ).

How do you build complex functions ?

This might sound trivial, however it is quite important to get this idea before you go ahead and build a module with 30 cmdlets and 3000 lines of code. From my DevOps experience I can tell you that wherever possible I start building from generic functions, which I then use in more specific functions ( a form of wrappers one level above ).

I think if we were to visualise it, we would get something along the lines of:

[Diagram: high-level functions built on top of generic low-level functions]

 

Someone just looking at it could say it looks really promising and professional 😀 Well, it does. It is all about low-level functions ( the generic ones ) working more or less with raw data objects without performing complex validations. Those validations and the remaining high-level functionality are done by the high-level functions.

 

Sample code which I could use for executing such dependent functions could look as follows:

                # Required functions on the remote host
                $exportNewObjGeneric  = Export-FunctionRemote New-ObjGeneric                                                                                                                                             
                $exportAddProps       = Export-FunctionRemote Add-ObjProperties 
                $exportGetAdSite      = Export-FunctionRemote Get-MyAdSite

                $remoteFunctions = @($exportGetAdSite, $exportNewObjGeneric, $exportAddProps)

Invoke-Command -Session $Session -ScriptBlock {   
            
                    # we recreate each of required functions
                    foreach($singleFunction in $using:remoteFunctions)
                    {
                        . ([ScriptBlock]::Create($singleFunction));
                    }
                 # ---- some other code doing magic -----
}

 

 

Great! Since it sounds so easy to implement … where is the catch …. 😀 Of course there is one … look what potentially could happen if you just go ahead and start referencing functions within a module ….

[Diagram: tangled web of direct function references within a module]

 

It will not take long before you lose track of what references what and where your dependencies are. This is just asking for trouble, as you will practically not be able to assert what the consequences of your changes on one end will be. I can guarantee you would get a butterfly effect in this scenario!

You will quickly lose the ability to properly manage the code, and automation will become more of a nightmare than a pleasure!

 

I have seen that too – however I approached it a bit differently. I thought we could utilize something that every function of mine has – comment-based help!

Look at the example function below – for the purposes of this post I have amended the comment-based help.

[Screenshot: example function with RequiredFunction<...> markers in its comment-based help]

 

 

Have you noticed anything special? …. Well, if you didn't, let me explain ….

 

Reference functions which are dependencies

Yes! This is the road to ultimate automation. I have come up with an idea which can be described as follows:

  • Functions can nest ( have a dependency ) only one level deep – so no dependency within a dependency within a dependency ( but maybe you will come up with a more elegant way to overcome this 😀 )
  • Generic functions should not have dependencies on custom functions

With those 2 points I was able to continue with this solution. Therefore I amended a bit of the code we used last time and came up with the following:
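In essence the helper reads the source of the given function, picks up every RequiredFunction<Name> marker from the comment-based help and exports those functions for the remote session. A minimal sketch ( the actual Get-RemoteRequiredFunctions on GitHub may differ in details ) could look like this:

# Rough sketch of the idea - the real implementation on GitHub may differ
function Get-RemoteRequiredFunctions
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true)]
        [string]$functionName
    )

    $remoteFunctions = @()

    # Grab the source of the function, including its comment-based help block
    $definition = (Get-Command -Name $functionName).Definition

    # Find every RequiredFunction<Name> marker declared in the help
    $found = [regex]::Matches($definition, 'RequiredFunction<(?<name>[^>]+)>')

    foreach ($item in $found)
    {
        # Export each dependency so it can be recreated in the remote session
        $remoteFunctions += Export-FunctionRemote $item.Groups['name'].Value
    }

    return $remoteFunctions
}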

 

Now this function references one we already discussed, called ‘Export-FunctionRemote’ (available @ Github ).

So what do we get from the above? Well, we get something really great. In a controlled way, by decorating our function with comment-based help and specifying RequiredFunction<Some-FunctionName>, it will be considered a dependency in our script.

    <#
        .SYNOPSIS
        Do xyz 

        .FUNCTIONALITY
        RequiredFunction<Get-NobelPrice>
    #>

 

Finally, a usage example

So we need to use what we have just received. I won't take long to explain – this is the pure awesomeness of automation 🙂 …

                # Acquire the current function name
                $functionName = $MyInvocation.MyCommand
             
                # get remote functions into remote script block
                Write-Verbose "Exporting functions for $functionName"
                $remoteFunctions = Get-RemoteRequiredFunctions -functionName $functionName

               

                Invoke-Command -Session $Session -ScriptBlock {   
            
                    # we recreate each of required functions
                    foreach($singleFunction in $using:remoteFunctions)
                    {
                        . ([ScriptBlock]::Create($singleFunction));
                    }

                    # ---- do magic -----
                }

 

 

Summary

I hope you like the idea of automating your functions in a generic way to work with raw data and then using high-level functions to really utilize their potential. On top of that, you have now also received a way to perform advanced operations by creating function dependencies.

Of course this is more than extendable – you can build dependency trees – do more complex unit testing with Pester … there are no limits 😀

Got feedback / an issue? As usual, leave a comment or jump to GitHub / Gists