
Azure Files on Ubuntu

If you have not seen the recent post on the Azure blog, I would like to let you know that Azure Files is now GA. Details are covered in that blog entry, available here.

Since I don't want to duplicate that content, I'm going to show you how to get an Azure Files share mapped on your Linux boxes. Why Linux boxes? I already have a trillion ideas for using this – the major one being Docker and containers, which I would like to make highly available, or my own Docker repository.

 

Creating a file share via the portal is extremely easy and intuitive.

[Screenshot: creating an Azure Files share in the portal]

 

Install tools

We need to install the following package if it is not already present (I have become a fan of Ubuntu 🙂 ):

sudo apt-get install cifs-utils

 

Mount fileshare

The next step is mounting the share. This has some limitations depending on the SMB protocol version being used (for more detailed information, look into the Azure blog post linked above). I will be using SMB v3 in this instance, so we are good to go on using Azure Files on premises.

sudo mount -t cifs //rafpeninja.file.core.windows.net/docker-demo-data ./dockerdemodata -o vers=3.0,username=rafpeninja,password=YourAwesomeStorageKey==,dir_mode=0777,file_mode=0777

 

As I did not want to play with any restrictions yet, the permissions are kind of high 🙂 but you can modify them as you need.
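
If you want the share to survive reboots, a minimal /etc/fstab entry in the same spirit could look like the sketch below. The mount point /mnt/dockerdemodata and the idea of keeping the storage key in a separate credentials file are illustrative assumptions, not part of the original setup:

# /etc/fstab - sketch only; adjust the share, mount point and credentials file to your own
//rafpeninja.file.core.windows.net/docker-demo-data /mnt/dockerdemodata cifs vers=3.0,credentials=/etc/azurefiles.cred,dir_mode=0777,file_mode=0777 0 0

The referenced /etc/azurefiles.cred file would then simply contain the username and password (storage account name and key) lines used in the mount command above.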

 

Simple test

Once this is done you can head to the folder and create a sample file.

sudo touch test.me

 

When done you can see that file instantly via the portal

[Screenshot: the test file visible in the Azure portal]

 

 

And here you go – your file is immediately available. If you have any scenarios where you already use Azure Files, I'm keen to hear about them!

 

 

 


Pester – Introduction to test driven development (TDD) for Powershell

Today I wanted to start a series on Pester for PowerShell. If you have not heard about it before, you might find it quite interesting. It allows you to write code and test it alongside.

A real-life example of why this would be useful? Nothing easier – imagine complex functions executing chained operations. Making a small modification to one piece might not seem to have any drawbacks for the whole operation… but are you sure? It might turn out that this small modification is used somewhere further down the chain, and the impact is not visible at first.

And this is where Pester comes to the rescue! By getting into the habit of writing this way, you will save yourself from butterfly effects. I can assure you that thanks to this approach I was able to avoid several situations where exactly such small changes, with no visible impact, would have broken a lot of things 🙂

Get familiar

Pester is actively developed on GitHub, and you can head to the project page. I recommend checking out the wiki and the open issues, as those two are extremely useful sources of information.

 

Install Pester

Well, there is not much to say 🙂 With the new and shiny PowerShell it cannot be simpler:

[Screenshot: installing Pester]
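
In case the screenshot does not render for you: the command behind it is most likely just the standard module install (a sketch, assuming PowerShell 5 with PowerShellGet):

Install-Module -Name Pester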

 

 

And that was it – you are now set up for your first test.

 

First test

In order to run a simple test we will create two files: one for our real function and one for the tests. Pester makes it really easy, and we can use a built-in cmdlet to prepare those two for us:

New-Fixture -Name FirstFunctionTest

[Screenshot: New-Fixture creating the two files]

 

Let's make a dummy function in our FirstFunctionTest.ps1 file. I will keep this example really easy 🙂

function FirstFunctionTest
{
        return 1
}

And now let's move to the file FirstFunctionTest.Tests.ps1 and write the following:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "FirstFunctionTest" {
    It "returns value that is not null" {
        FirstFunctionTest | Should Not BeNullOrEmpty
    }

    It "returns value that is exactly 1" {
        FirstFunctionTest | Should be 1
    }
}

 

The majority of the code was prepared by Pester. For the moment I just defined that the function must return a value that is not null, and that the return value must be exactly 1. Great! Let's run this simple test now:

invoke-pester

And the results are instant 🙂

[Screenshot: Invoke-Pester output with both tests passing]

 

When a change was made

So now we will change our function to return something different – in a nutshell, we will simulate that a change has been made which can have a big impact 😀
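
A minimal sketch of such a change (simply altering the hard-coded return value – not necessarily the exact edit from the screenshot below) could be:

function FirstFunctionTest
{
        return 2   # changed from 1 - the "returns value that is exactly 1" test will now fail
}
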
[Screenshot: Invoke-Pester output with the failing test]

Thanks to Pester you would immediately see this 🙂

 

Summary

This is only a small example showing just the tip of what Pester can do. In the next posts we will be investigating much more complex scenarios. Stay tuned 🙂

 

 


Docker compose and ELK – setup in automated way

Originally this was supposed to be a short post about setting up an ELK stack for logging. However, with every moment I spent working with this technology I got really 'inspired', and I thought it would be worth making it work the right way from the very beginning.

 

Since we are up for automating things, we will make use of Docker Compose, which will allow us to set up the whole stack in an automated way. Docker Compose is detailed here.

Compose, in short, allows you to describe what your services will look like and how they interact with each other (volumes/ports/links).

In this post we will be using docker + docker-compose on an Ubuntu host running in Azure. If you are wondering why I just show my IP addresses all the time on the screenshots… it's because those are not load-balanced static IP addresses. So every time I spin up a host I get a new one 🙂

 


This post contains information which has been updated in the post

Docker compose and ELK – Automate the automated deployment

However, to get an idea of how the solution works, I recommend just reading through 🙂


 

 

Installing Docker-compose

So the first thing we need to do is install docker-compose. Since, as we all know, Docker is under constant development, it is easiest to point you to the GitHub releases page rather than to a direct link which can go out of date.
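
As a sketch, the usual install boils down to downloading the release binary and making it executable – the version number below is only an example, so take the latest one from the releases page:

sudo curl -L https://github.com/docker/compose/releases/download/1.4.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose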

Once installed you can use the following command to make sure it is installed :

docker-compose --version

 

Preparing folder structure

Since we will be using config files and storing Elasticsearch data on the host, we need to set up a folder structure. I'm aware that this could be done better with variables 🙂 but Ubuntu is still a learning curve for me, so I will leave it up to you to find better ways 🙂 In the meantime let's run the following commands:

sudo mkdir -p /cDocker/elasticsearch/data
sudo mkdir -p /cDocker/logstash/conf
sudo mkdir -p /cDocker/logstash/agent
sudo mkdir -p /cDocker/logstash/central
sudo mkdir -p /cDocker/compose/elk_stack

 

Clone configuration files

Once you have the folder structure, we will prepare our config files. To do this we will clone the GitHub repositories (gists) which I have prepared in advance (and tested as well, of course).

git clone https://gist.github.com/60c3d7ff1b383e34990a.git /cDocker/compose/elk_stack

git clone https://gist.github.com/6627a2bf05ff956a28a9.git /cDocker/logstash/central/

git clone https://gist.github.com/0cd6594672ebfe1205a5.git /cDocker/logstash/agent/

git clone https://gist.github.com/c897a35f955c9b1aa052.git /cDocker/elasticsearch/data/

 

Since I keep slightly different names on GitHub (this might be subject to change in the future), we need to rename the files a bit 🙂 For this you can run the following commands:

mv /cDocker/compose/elk_stack/docker-compose_elk_with_redis.yml  /cDocker/compose/elk_stack/docker-compose.yml

mv /cDocker/elasticsearch/data/elasticsearch_sample_conf.yml /cDocker/elasticsearch/data/elasticsearch.yml

mv /cDocker/logstash/agent/logstash_config_agent_with_redis.conf /cDocker/logstash/conf/agent.conf

mv /cDocker/logstash/central/logstash_config_central.conf /cDocker/logstash/conf/central.conf

 

Docker compose file

If you look at the compose file below, you will notice that we define how our images will be built, what ports will be exposed, and what links will be created amongst containers. Thanks to that, the machines will be created in a specific order and linked accordingly. And since we have already prepared the configuration files, the whole stack will be ready to go.
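
The embedded compose file is not reproduced here, but below is a rough sketch of what such a docker-compose.yml could look like. Service names, images and paths are assumptions based on the folder structure and config files prepared above, so treat it as an illustration rather than the exact file from the gist:

elasticsearch:
  image: elasticsearch
  ports:
    - "9200:9200"
  volumes:
    - /cDocker/elasticsearch/data:/usr/share/elasticsearch/data

redis:
  image: redis
  ports:
    - "6379:6379"

logstash-agent:
  image: logstash
  command: logstash -f /conf/agent.conf
  volumes:
    - /cDocker/logstash/conf:/conf
  ports:
    - "25826:25826"
    - "25826:25826/udp"
  links:
    - redis

logstash-central:
  image: logstash
  command: logstash -f /conf/central.conf
  volumes:
    - /cDocker/logstash/conf:/conf
  links:
    - redis
    - elasticsearch

kibana:
  image: kibana
  ports:
    - "5601:5601"
  links:
    - elasticsearch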

 

Execute orchestration

Now we have everything in place for the first run of our orchestration. The next step is just navigating to the compose folder (where our docker-compose.yml file is) and running the following command:

cd /cDocker/compose/elk_stack
docker-compose up -d

This will pull all the required image layers and create the services afterwards. Once completed you should see something similar to the following:

[Screenshot: ELK stack containers up and running]
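
If you would rather check from the command line than from the screenshot, listing the services from the same folder does the trick:

docker-compose ps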

 

 

Summary

Well, and that's it folks! There is of course potential to do much more (using variables/labels etc.), however we will do more funky stuff in the next posts. Since Azure Files is finally in production, we will use it as persistent storage in one of our future posts, so stay tuned.

On the subject of a ready-to-use ELK stack, we will be looking into managing input using Logstash plugins, and we will see with our own eyes how this Dockerised ELK stack will empower our IoT automations!

 

 

 


Powershell with Azure – deploy IaaS with Azure Resource Manager

These days managing the cloud should be something that is well automated and that you can be really comfortable with. Microsoft Azure, when using Azure Resource Manager, allows you to manage infrastructure via APIs or via PowerShell (which calls the web APIs under the hood).

I have to say that both approaches are quite nice. Some time ago I already worked on ARM JSON templates (using the Visual Studio add-on), and they enable you to perform advanced operations in a declarative way.

The good news is that we can do that with PowerShell as well. I'm aware that all over the internet you can find ready scripts that will do deployments with the click of a button 🙂 but this is about exercising 🙂 as that's how we learn.

The first thing you should make sure of is that you have the Azure PowerShell module installed. So far I have always used the Web Platform Installer. Once installed, you should see it listed when you query for modules:

PS S:\> Get-Module *azure* -ListAvailable
ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   0.9.8      Azure                               {Disable-AzureServiceProjectRemoteDesktop, Enable-AzureSer...

 

With the above prerequisite in place we can continue with our exercise. Our target will be to deploy 4 virtual machines. The first of them will become a domain controller and should have a static IP address. The remaining ones will use dynamic addresses. We will not be creating availability groups, and we will only have one public IP address (we will investigate a different setup in one of the next posts), which will expose port 3389 for us (we will be restricting that via security groups, though).

Of course I don't have to remind you that for this you need a valid Azure subscription (but I assume you have one – even a trial 🙂 ). The script as a whole is available via GitHub and is linked at the end of this post.

 

General setup

First we start off with setting up our Azure subscription credentials and defining subscription details and the number of VMs to be created. Here we start by getting our credentials (if we were using Azure AD and delegating credentials to a newly created user, we could pass a PSCredential object as an argument). Later on we select one of the available subscriptions (we can use Out-GridView to give the end user the option of selecting one).

 

 

#region Setup subscription and script variables

## Get up your credentials (only if using Azure AD )
# $credentialSubscription = Get-Credential 
# Add-AzureAccount -Credential $credentialSubscription

## Enable debugging
$DebugPreference ='Continue'


## Login into your subscription
Add-AzureAccount 

# using grid view select a subscription - in scripted scenario you will have the ID supplied to this script 
$subscriptionId =  (Get-AzureSubscription |Out-GridView -Title 'Choose your subscription' -PassThru).SubscriptionId

# Since user can click cancel if we dont have a subscriptionId we quit - again only for non automated scenario 
if ( [string]::IsNullOrEmpty( $subscriptionId ) ) { return }

# If you have more than 1 subscription associated it might be handy to choose current :) 
Select-AzureSubscription -SubscriptionId $subscriptionId -Current

## Switch to ARM - be aware that this functionality will be deprecated 
## https://github.com/Azure/azure-powershell/wiki/Deprecation-of-Switch-AzureMode-in-Azure-PowerShell 
Switch-AzureMode AzureResourceManager

## Check available locations 
$azureLocations= Get-AzureLocation 

## Check which locations we have available 
($azureLocations | Where-Object Name -eq ResourceGroup).Locations

## Select one location (we could use the grid view however I will deploy in West Europe)
$selectedAzureLocation = 'West Europe'

## Check registered providers - also useful when looking for resources 
Get-AzureProvider

## Define resources prefix that we will use across this script 
## If this would be coming from variable outside of script we would need to make sure its lowercase 
$rscprefix = ('armdeploy').ToLower()

## Create tags to be used later 
$tags = New-Object System.Collections.ArrayList
$tags.Add( @{ Name = 'project'; Value = 'armdeploy' } )
$tags.Add( @{ Name = 'env'; Value = 'demo' } )

## Another way to create tags 
$tags = @( @{ Name='project'; Value='armdeploy' }, @{ Name='env'; Value='demo'} )

## Number of VMs to create 
$VMcount=4

#endregion

 

The most important part here is switching the Azure operations mode, done with:

Switch-AzureMode AzureResourceManager

This command has been deprecated and will not be available in the future! Please take a look at the post here for more detailed information!

And since I like to be in control of what's going on, I tend to change the output to be more verbose on the debug side. This is done easily by specifying:

## Enable debugging
$DebugPreference ='Continue'

 

Create resource group

Within Azure we now have the concept of resource groups, which are a form of "container" for keeping related resources together. So if we want to create new objects we must start with a resource group. Creating one is quite easy.

#region Create Azure Resource Group

## Prepare resource group name 
$rscgName = "rg-${rscprefix}"

## Check if our resource group exists 
If (!(Test-AzureResourceGroup -ResourceGroupName $rscgName))
    {
        # Does not exist - create it - also set up deployment name
        $resourceGroup = New-AzureResourceGroup -Name $rscgName -Location $selectedAzureLocation -Tag $tags -DeploymentName "deploy-${rscgName}"
    }
else
    {
        # Exists - get the resource by resource group name
        $resourceGroup = Get-AzureResourceGroup -Name $rscgName
    }


#endregion

 

Throughout the rest of the post you will see me checking for resources using Test-<typeOfResource>; however, a look at GitHub shows that some of those are deprecated as well. So it might be that this part will require a bit of rework.

Create storage account

In order to store OS and data disks we must have an object within Azure to hold them, and here we can utilize Azure storage accounts. For this we create an account – but in a real-life scenario you might just go ahead and use an existing one, for example.

#region Create Azure Storage Account

# Since we need to store the virtual machine data somewhere we need a storage account 
# we can script in a way that if it does not exist it will be created within our resource group

## Storage account name 
$saName="sa${rscprefix}"


## handy method to find valid types for storage account type

# First we get our command 
$gc= get-command New-AzureStorageAccount

# Then we navigate to property holding attributes which are TypeId of System.Management.Automation.ValidateSetAttribute 
# so it is clearly visible that those will be validate set values 
$gc.Parameters['Type'].Attributes.ValidValues


# Based on the above lets choose a type for the storage account 
$saType = 'Standard_LRS'


## Here we will check if we have storage account within our resource group
if (!(Test-AzureResource -ResourceName $saName -ResourceType 'Microsoft.Storage/storageAccounts' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        # No storage account so lets go ahead and create it based on parameters we have above
        $sa = New-AzureStorageAccount -Name $saName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -Type $saType

    }
else
    {
        # Storage account exists - lets grab its resource 
        $sa = Get-AzureStorageAccount -ResourceGroupName $resourceGroup.ResourceGroupName -Name $saName
    }


## Once this is completed lets set subscription storage account since we do have ID already 
Set-AzureSubscription -SubscriptionId $subscriptionId -CurrentStorageAccountName $saName


#endregion

 

Create virtual networks

In order to have networking running properly you need a network. I really like the concept of virtual networks and subnets; when you connect them directly to network interfaces and other objects, things start to make sense – it all interconnects 🙂

#region Create Azure Virtual Network

$vnetName = "vnet-${rscprefix}"


$vNetsubnet1Name = "${rscprefix}-subnet1"

$vNetsubnet2Name = "${rscprefix}-subnet2"

# Create Virtual Network if it doesn't exist
if (!(Test-AzureResource -ResourceName $vnetName -ResourceType 'Microsoft.Network/virtualNetworks' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    
    # Create first subnet 
    $vSubnet1 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet1Name -AddressPrefix '10.0.1.0/24'

    # Create second subnet
    $vSubnet2 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet2Name -AddressPrefix '10.0.2.0/24'

    # Create virtual network
    $vNetwork = New-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AddressPrefix '10.0.0.0/16' -Subnet $vSubnet1, $vSubnet2 -Tag $tags

} 
else 
{

    # retrieve virtual network if already exists
    $vNetwork = Get-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName

}

#endregion

You can see in the above that I create 2 subnets. Although I could get away with one, we might use the second one in upcoming posts.

 

Create public IP address

As mentioned before, I'm after a really simple setup here. So I will just create a single public IP address (and make sure it is resolvable via DNS), which I will be using later to connect to the VMs.

#region Create Azure Public IP address

$publicIPname = "pip-${rscprefix}" # PublicIP => pip

$publicIPdns = "dns-${rscprefix}"  # this will be our DNS name


## Here we check for an available DNS name for the public IP
$retryCntDns = 0

do
{
    $retryCntDns++
    $publicIPdns="dns-${rscprefix}-${retryCntDns}"
    $domainAvailable = ( Test-AzureDnsAvailability -DomainQualifiedName $publicIPdns -Location $selectedAzureLocation )
}
while(!$domainAvailable -and $retryCntDns -lt 3) 

# Check if we have our resource already existing 
if (!(Test-AzureResource -ResourceName $publicIPname -ResourceType 'Microsoft.Network/publicIPAddresses' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    If (!$domainAvailable)
    {
        # we dont have public domain available here - we create without DNS entry
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -Tag $tags
    }
    else
    {
        # We do have dns available - lets create it with DNS name
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -DomainNameLabel $publicIPdns -Tag $tags
    }
    

} 
else 
{
    # Seems like we have already public IP address so we can just go ahead and retrieve it
    $publicIp = Get-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network security group

To provide security we can now define ACLs on objects like subnets and network interfaces, which gives us granular security. Below I will just create one rule for remote desktop access (in this example allowing it from any source – which is not a good idea in production).

#region Create Network Security Group & Rules

# Define unique name for NSG resource
$nsgName = "nsg-${rscprefix}"


if (!(Test-AzureResource -ResourceName $nsgName -ResourceType 'Microsoft.Network/networkSecurityGroups' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{

    # Create RDP access rule (the script allows it from everywhere - you should review this for your environment's security)
    $nsgRule_RDP = New-AzureNetworkSecurityRuleConfig `
        -Name 'allow-in-rdp' `
        -Description 'Allow Remote Desktop Access' `
        -SourceAddressPrefix * `
        -DestinationAddressPrefix * `
        -Protocol Tcp `
        -SourcePortRange * `
        -DestinationPortRange 3389 `
        -Direction Inbound `
        -Access Allow `
        -Priority 100

    # Create Network Security Group with Rule above 
    $nsg = New-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SecurityRules $nsgRule_RDP -Tag $tags

} 
else 
{
    # Get NSG if already created
    $nsg = Get-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network interfaces

Now, so that it all connects together, we create the network interfaces. To the first interface we will additionally attach our public IP address.

#region Define network interfaces 

$networkInterfaces = @() # We will use this array to hold our network interfaces

# For each VM we will create a network interface

for ($count = 1; $count -le $VMcount; $count++) 
{

    $nicName = "${rscprefix}-nic${count}"

    if (!(Test-AzureResource -ResourceName $nicName -ResourceType 'Microsoft.Network/networkInterfaces' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {

        $nicIndex = $count - 1
        
        # The first VM will be our domain controller/DNS and it needs static IP address 
        if ($count -eq 1)
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id -IpConfigurationName 'ipconfig-dc01' -PrivateIpAddress 10.0.1.4 -PublicIpAddressId $publicIp.Id
        }
        else
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id
        }
         

    } 
    else 
    {
        # retrieve existing
        $networkInterfaces += Get-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName
    }

}

#endregion

 

Provision VMs

And now the time has come to finally provision the virtual machines based on the resources we have prepared.

#region Provision virtual machines


## If you would like to you could use those to present enduser (or yourself) with visual option to choose publisher/offer and SKU 
## as this is scripted version example we will use hardcoded values 

#$publisherName = ( Get-AzureVMImagePublisher -Location $selectedAzureLocation ).PublisherName | Out-GridView -Title 'Select a VM Image Publisher ...'  -PassThru
#$offerName = ( Get-AzureVMImageOffer -PublisherName $publisherName -Location $selectedAzureLocation ).Offer | Out-GridView -Title 'Select a VM Image Offer ...' -PassThru
#$skuName = ( Get-AzureVMImageSku -PublisherName $publisherName -Offer $offerName -Location $selectedAzureLocation ).Skus |Out-GridView -Title 'Select a VM Image SKU' -PassThru

$publisherName = 'MicrosoftWindowsServer'
$offerName='WindowsServer'
$skuName='2016-Technical-Preview-3-with-Containers'

# Take latest version
$version = 'latest'

# We will use basic version of VMs - later we will be able to provision it further
$vmSize = 'Basic_A1'

# Get credentials for admin account - you may want to modify username
$vmAdminCreds = Get-Credential Adminowski -Message 'Provide credentials for admin account'

# array to hold VMs
$vm = @()

# Create VMs
for ($count = 1; $count -le $VMcount; $count++) 
{ 
    
    # create suffixed VM name
    $vmName = "vm-${count}"

    # Check if resource already exists
    if (!(Test-AzureResource -ResourceName $vmName -ResourceType 'Microsoft.Compute/virtualMachines' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        
        $vmIndex = $count - 1

        $osDiskLabel = 'OSDisk'
    
        $osDiskName = "${rscprefix}-${vmName}-osdisk"

        $osDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${osDiskName}.vhd"

        $dataDiskSize = 200 # Size in GB

        $dataDiskLabel = 'DataDisk01'

        $dataDiskName = "${rscprefix}-${vmName}-datadisk01"

        $dataDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${dataDiskName}.vhd"

        $vmConfig =  New-AzureVMConfig -VMName $vmName -VMSize $vmSize | `
            Set-AzureVMOperatingSystem `
                -Windows `
                -ComputerName $vmName `
                -Credential $vmAdminCreds `
                -ProvisionVMAgent `
                -EnableAutoUpdate |
            Set-AzureVMSourceImage `
                -PublisherName $publisherName `
                -Offer $offerName `
                -Skus $skuName `
                -Version $version |
            Set-AzureVMOSDisk `
                -Name $osDiskLabel `
                -VhdUri $osDiskUri `
                -CreateOption fromImage |
            Add-AzureVMDataDisk `
                -Name $dataDiskLabel `
                -DiskSizeInGB $dataDiskSize `
                -VhdUri $dataDiskURI `
                -CreateOption empty |
            Add-AzureVMNetworkInterface `
                -Id $networkInterfaces[$vmIndex].Id `
                -Primary

        New-AzureVM `
            -VM $vmConfig `
            -ResourceGroupName $resourceGroup.ResourceGroupName `
            -Location $selectedAzureLocation `
            -Tags $tags

    }
    else
    {
        # Get the VM if already provisioned
        $vm += Get-AzureVM -Name $vmName -ResourceGroupName $resourceGroup.ResourceGroupName
    }


}

#endregion

 

Look at completed action

Once the whole script completes we get direct access to our newly created resources. It looks good and is a worthwhile starting point for automation and orchestration. From here the next logical step is to describe how this infrastructure should be configured with DSC, and that is something we will do in one of our next posts.

 

[Screenshot: deployed resources in the Azure portal]

 

Happy powershelling 🙂

 

 

 

Script in full


ASP.NET 5 – Dependency injection with AutoFac

Today we will shift a bit from previous tracks in order to look more at Visual Studio 2015, which powers us with MVC 6 / ASP.NET 5. I personally find that Microsoft is going in the right direction – especially by being so open source.

But coming back to the original subject of this post: when you create a new project in VS2015 and select .NET 5, you can see that this is still in preview – therefore it might be that the information provided in this post is already out of date! I recommend you take that into account.

For .NET 5 documentation look here. And if you are more interested in Autofac, check the documentation here.

Startup.cs

        public IServiceProvider ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            //create Autofac container build
            var builder = new ContainerBuilder();

            //populate the container with services here..
            builder.RegisterType<DemoService>().As<IProjectDemo>();
            builder.Populate(services);

            //build container
            var container = builder.Build();

            //return service provider
            return container.ResolveOptional<IServiceProvider>();
        }
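
For completeness, here is a minimal sketch of what the registered service and its consumption could look like. IProjectDemo, DemoService and HomeController are illustrative assumptions, not code from the original project:

public interface IProjectDemo
{
    string GetName();
}

public class DemoService : IProjectDemo
{
    public string GetName()
    {
        return "Demo";
    }
}

// Any MVC 6 controller can now take the dependency via constructor injection,
// resolved by the Autofac container built in Startup
public class HomeController : Controller
{
    private readonly IProjectDemo _demo;

    public HomeController(IProjectDemo demo)
    {
        _demo = demo;
    }

    public IActionResult Index()
    {
        return Content(_demo.GetName());
    }
}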

 

Project.json

  "dependencies": {
    "Autofac": "4.0.0-beta6-110",
    "Autofac.Framework.DependencyInjection": "4.0.0-beta6-110",
    "Microsoft.AspNet.Mvc": "6.0.0-beta6",
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta6",
    "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6",
    "Microsoft.AspNet.StaticFiles": "1.0.0-beta6"

  },

 

What I also learned at this stage: it is not smart to mix different beta versions. So if possible try to keep them at the same level. Hope this helps and gets you going!

 

We will definitely be revisiting Autofac in later posts when we play around with creating REST services or other apps!


Running ElasticSearch/Kibana and Logstash on Docker

In today's world, if the combination of words in the subject is new to you, it means you need to catch up quickly 😀 In the IT world Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say that nowadays we have DevOps who, with the click of a button, change the world 😀

Today we will discuss how, with ElasticSearch, Logstash and Kibana running on Docker, you can visualise your environment's behaviour and events. At this stage I would like to point out that this can be useful not only in IT, where you get insights into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single "go" you will build the required components and see its potential.

Since this will only touch the real basics, I will try to point you to more interesting sources of information.

The whole exercise will be done on a host running Ubuntu with the following version installed:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker engine on this OS, so make sure you have the engine installed.

As a quick verification, this is the version of Docker running during the write-up of this post:

Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

 

So since we have that ready, let's fire up an instance of ElasticSearch. Since we would like to store data outside of the container, we need to make a folder somewhere on the host. As this is only a non-production exercise, I will just use a simple folder in the root of the filesystem. For this purpose I have created a folder called cDocker and within it a subfolder data/elasticsearch. This can be achieved by running the following in the console:

sudo mkdir -p /cDocker/data/elasticsearch

Once ready we can kick off creation of our container

sudo docker run -d --name elasticsearch -p 9200:9200 -v /cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all required image layers we can see container running on our Docker host

[Screenshot: elasticsearch container created and running]

 

You can see that we have exposed port 9200 for communicating with the API. For ease of making API calls I will be using the Postman add-on for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:

[Screenshot: Elasticsearch _status response in Postman]
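
If you prefer the command line over Postman, the same check can be done with curl (a sketch, assuming you run it on the Docker host itself):

curl -XGET http://localhost:9200/_status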

 

For the next part we will create the Logstash container, based on the Logstash image. The main difference here is that we will link our elasticsearch container so the two can talk to each other.

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 TCP/UDP and mount a volume for configuration (here I use $(pwd) for an existing folder in my current console session). Next we link our elasticsearch container and give it the alias db. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf. Since that file does not exist, we must create it. The contents of that file come directly from the Logstash documentation and are a really basic configuration enabling us to see a working solution.
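
The exact file is not reproduced here, but a basic first.conf in that spirit could look like the sketch below – it listens on the exposed port and forwards everything to the linked elasticsearch container (reachable under the db alias set up above):

input {
  tcp {
    port => 25826
  }
  udp {
    port => 25826
  }
}

output {
  elasticsearch {
    host => "db"
    protocol => "http"
  }
  stdout {
    codec => rubydebug
  }
}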

Now if I open 2 session windows – one to tail the logstash container logs and the other to create a telnet connection to port 25826 – we will see that a message I type into the telnet session gets translated and forwarded to elasticsearch.

[Screenshot: test message passing through logstash]

 

Of course this kind of configuration is only good for an exercise, but it quickly shows how nicely we can get the system running.

So since that's ready, it's time to set up Kibana. It's quite easy using the default image from Docker Hub. I have chosen to link the containers for the ease of this exercise.

docker run --name kibana --link elasticsearch:elasticsearch -p 5601:5601 -d kibana

And now, seconds later, we can log in to our Kibana server and take a look at our forensic details 🙂 The message we sent before as a test is already visible! How cool is that 😀 ?

 

 

[Screenshot: first test event visible in Kibana]

Let's add some extra fake messages so we have something to visualise. I will be doing that using the telnet command, sending some dummy messages to logstash.
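
As a sketch, from any machine that can reach the Docker host this is just a matter of:

telnet <docker-host-ip> 25826

Every line typed into that session should then show up as an event in elasticsearch.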

After that's done 🙂 we can create visualisations – and from there onwards… awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power there, and you should explore the available resources if you want to do more 😀

[Screenshot: first dashboard with pie charts in Kibana]

 

Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production: using Redis to avoid bottlenecks and lost messages, avoiding complex message parsing, etc. We will try to look into some of those in upcoming posts!

 

 


Powershell – first character to upper

This is just a quick write-up. If you need to change the first character of a string to upper case with PowerShell, a good way to do it would be something like:

$variableToChange = "all lowercase"
$result = -join ($variableToChange.Substring(0,1).ToUpper() ,$variableToChange.Substring(1,$variableToChange.Length-1 ) )
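
A quick check of the result (expected output shown as a comment):

$result
# All lowercase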

 

As an alternative, you can apply title capitalization to the whole string with:

(Get-Culture).TextInfo.ToTitleCase('rafpe ninja')

 


PowerShell – using Nlog to create logs

If you are after a logging framework, I can recommend one I have been using not only on Windows but also in C# development for web projects. It's called NLog and it is quite powerful, allowing you not only to log in a specific format or layout, but also to make logging reliable (by having e.g. multiple targets with failover) and performant (e.g. async writes). That's not all! Thanks to the out-of-the-box features you can log to flat files, databases, network endpoints, web APIs… that's just great!

NLog is available on GitHub here, so I recommend you go there and get yourself familiar with the wiki explaining usage and showing some examples.

At this point I can tell you that you can either use an XML config file or configure the logger on the fly before creation. In this post I will show you both options so you can choose the one that suits you best.

 

The high-level process looks as follows:

  1. Load assembly
  2. Get configuration ( or create it )
  3. Create logger
  4. Start logging

 

Nlog with XML configuration file

The whole PowerShell script, along with the configuration module, looks as follows:

Now, the thing that may be of interest to you is the way we load our assembly. What I use here is reading the file into a byte array and then passing that as a parameter to the assembly Load method.

$dllBytes = [System.IO.File]::ReadAllBytes( "C:\NLog.dll")
[System.Reflection.Assembly]::Load($dllBytes)

The reason for doing it this way is to avoid situations where the file is locked by 'another process'. I have had that happen in the past, and with this approach it will not 🙂

 

The next part, with the customized data, is used when we would like to pass custom fields into our log. The details are described here on the NLog page.

 

After that I’m loading configuration and assigning it

$xmlConfig                       = New-Object NLog.Config.XmlLoggingConfiguration("\\pathToConfig\NLog.config")
[NLog.LogManager]::Configuration = $xmlConfig
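
For reference, a minimal NLog.config that such a call could load might look like the sketch below – the file target, path and layout simply mirror the on-the-fly example further down, so adjust them to your own needs:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

  <targets>
    <target name="logfile" xsi:type="File"
            fileName="D:\Tools\${date:format=yyyyMMdd}.log"
            layout="timestamp=${longdate} host=${machinename} logger=${logger} loglevel=${level} message=${message}" />
  </targets>

  <rules>
    <logger name="*" minlevel="Info" writeTo="logfile" />
  </rules>

</nlog>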

 

Nlog with configuration declared on the fly

As promised, it might be that you would like to use NLog with configuration done on the fly instead of a centralized one. In the example below I will show you the file target as one of the options. There is much more, so you may want to explore the remaining options.

    # Create file target
    $target = New-Object NLog.Targets.FileTarget  

    # Define layout
    $target.Layout       = 'timestamp=${longdate} host=${machinename} logger=${logger} loglevel=${level} message=${message}'
    $target.FileName     = 'D:\Tools\${date:format=yyyyMMdd}.log'
    $target.KeepFileOpen = $false
    
    # Init config
    $config = new-object NLog.Config.LoggingConfiguration

    # Add target 
    $config.AddTarget('File',$target)

    # Add rule for logging
    $rule1 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Info,$target)
    $config.LoggingRules.Add($rule1)

    # Add rule for logging
    $rule2 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Off,$target)
    $config.LoggingRules.Add($rule2)

    # Add rule for logging
    $rule3 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Error,$target)
    $config.LoggingRules.Add($rule3)

    # Save config
    [NLog.LogManager]::Configuration = $config

    $logger = [NLog.LogManager]::GetLogger('logger.name')

 

Engineers…. Start your logging 🙂

Once done, there is not much left 😀 you can just start logging by typing:

$logger.Info('some info message')
$logger.Warn('some warn message')
$logger.Error('some error message')

 

 


Docker on Windows – Running Windows Server 2016

So without any further delays, we go ahead and create our environment to play around with containers – however, this time we will do it on Windows!

As you know, with the release of Windows Server 2016 TP3 we now have the ability to play around with containers on a Windows host. Since Docker is under heavy development it is possible that a lot will change in RTM, so check back for any updates to this post 😀

If you are into automating admin work like I am, probably one of the first commands you run would be…

powershell.exe

… 🙂 of course – the Windows engineer's best friend!

 

To get this show started, I'm using Windows Server 2016 TP3 on Azure, as that gives the biggest flexibility. Microsoft has already posted some good pointers on how to get started using Docker. That documentation (or rather, technical guide) is available here. It explains how to quickly get started.

So we start off by logging into our Windows host and starting a PowerShell session:

[Screenshot: PowerShell session on Windows Server 2016 TP3]

 

A cool thing I wasn't aware of is syntax highlighting (something that people on Unix have had for a while 🙂 ), which makes working with PS and its output more readable (in my opinion).

As mentioned in my previous post, you have the option to manage containers with Docker (as we now know it on Ubuntu, for example) or with PowerShell. Since I have been working with Docker already, I decided to investigate that route and leave PowerShell for a bit later.

Following the documentation linked above, we can see that Microsoft has been really kind and prepared a script for us which will take care of the initial configuration and the download of all the necessary Docker tools.

In order to download it we need to execute the following command :

wget -uri http://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1

If you would rather get to the source of the script, it's available here.

Once downloaded you can just start the script and it will take care of the required configuration and download of the images. Yes… downloading those images can take a while – it's approximately ~18 GB of data. So you may want to start the configuration before your favourite TV show, or maybe a game of football in the park 😀

Once completed we have access to the goodies – we can start playing with Docker. The first thing worth doing is checking our Docker information… easily done with:

docker info

In my case the output is following :

[Screenshot: docker info output]

 

Off the top of my head, what is definitely worth investigating is the logging driver (when used a bit differently it allows you to ship Docker logs to a centralised system, e.g. ElasticSearch… but more about that a bit later 😀 ). The rest we will investigate along with this learning series on Docker on Windows.

Now, what would Docker be without images! After running that long configuration process, we get access to the Windows images prepared for us. If you have not yet been playing around with those, you can list them by issuing:

docker images

With that we get the available images :

[Screenshot: docker images output]

The first thing to notice is that the approximate size of the default image is ~9.7 GB, which raises a question these days – is that a lot? I think you need to answer that yourself 🙂 or wait for MS to provide a bit more detail (unless that is already out and I haven't found it 🙂 ). In my experience with Docker on Ubuntu, setting up a Linux host and containers is a matter of minutes, so that many gigabytes of data on Windows might be a bit of a showstopper for spinning up Windows hosts for Docker.

Now, since we have our image, it might be useful to get more detailed information. We can get it by issuing the command:

docker inspect <Container Id> | <Image Id>

The results are following :

[
{
    "Id": "0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea896852e",
    "Parent": "",
    "Comment": "",
    "Created": "2015-08-14T15:51:55.051Z",
    "Container": "",
    "ContainerConfig": {
        "Hostname": "",
        "Domainname": "",
        "User": "",
        "AttachStdin": false,
        "AttachStdout": false,
        "AttachStderr": false,
        "ExposedPorts": null,
        "PublishService": "",
        "Tty": false,
        "OpenStdin": false,
        "StdinOnce": false,
        "Env": null,
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "VolumeDriver": "",
        "WorkingDir": "",
        "Entrypoint": null,
        "NetworkDisabled": false,
        "MacAddress": "",
        "OnBuild": null,
        "Labels": null
    },
    "DockerVersion": "1.9.0-dev",
    "Author": "",
    "Config": null,
    "Architecture": "amd64",
    "Os": "windows",
    "Size": 9696754476,
    "VirtualSize": 9696754476,
    "GraphDriver": {
        "Name": "windowsfilter",
        "Data": {
            "dir": "C:\\ProgramData\\docker\\windowsfilter\\0d53944cb84d022f5535783fedfa72981449462b542cae35709a0ffea89
6852e"
        }
    }
}
]

 

So here we go – we will create our first container by running the following command. It will produce regular output and will run in the background.

docker run -d --name firstcontainer windowsservercore powershell -command "& {for (;;) { [datetime]::now ; start-sleep -s 2} }"

In order to see what the container is outputting you can issue the command:

docker logs <container Id>|<container name>

 

That's all fine… but how do we make any customisations to our container? The process is fairly simple: we run a new container and make our changes. Once we are happy with the changes we have implemented, we can commit them and save our image. We will quickly explore this by creating a container which will host our IIS web server.

We begin by creating a new container and entering an interactive session:

docker run -it --name iisbase windowsservercore powershell

Once the container is up, we are taken directly to a PowerShell session within that container. We will use the well-known way to get the base image configured. What we are after here is adding the web server role using PS. First let's check that it is definitely not installed:

Get-WindowsFeature -Name *web*

[Screenshot: Get-WindowsFeature *web* output inside the container]

After that we will just add the web server role and then exit the container. Let's issue the command for the installation of the role:

PS C:\> Add-WindowsFeature -Name Web-Server

[Screenshot: installing the Web-Server role inside the container]

 

Before we exit, there is something worth mentioning… the speed of containers (at least at the moment of writing this post, where the people at MS are still working on it 🙂 ). It can be significantly improved by removing the anti-malware service from your base image. This can be done by running the following command:

Uninstall-WindowsFeature -Name Windows-Server-Antimalware

 

Now we can exit our container by simply typing

exit

A small thing worth mentioning 🙂 pasting string content from the clipboard into containers is currently limited to ~50 characters, which is a work in progress and should be lifted in the next releases.

 

Ufff, so we got to the point where our container has been configured. It's time to build an image from it. This can be done (at the moment) only on containers which are stopped, so stop the container first. To execute the commit run:

# docker commit <container Id> | <container Name> <RepoName> 
docker commit 5e5f0d34988a rafpe:iis 

 

The process takes a bit of time, however once completed we have access to our new image, which allows us to spin up multiple containers 🙂 If you would like to inspect the image created, you can use the approach and commands discussed earlier in this post.

[Screenshot: committed rafpe:iis image in the docker images output]

 

And as you can see, our new image rafpe is slightly bigger than the base image (this is due to the changes we made).

Let's go ahead and do exactly what we have been waiting for – spin up a new container based on this image:

docker run -d --name webos -p 80:80 rafpe:iis

Now, at the moment of writing, I could not connect to the exposed port 80 from the container host by issuing something along the lines of:

curl 127.0.0.1:80

According to information I have found on the MSDN forums, other people are experiencing the same behaviour. This means that (if enabled on the firewall) you can reach your container's exposed port from external systems (check that you have NAT correctly set up).
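
If the firewall turns out to be the blocker, a hedged example of opening the port on the container host with PowerShell (a sketch – adjust the port and rule name to your setup) would be:

New-NetFirewallRule -DisplayName "Allow HTTP 80" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow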

 

Now, to add something useful if you would like to try a different approach for this exercise with Docker: to find different images use the following command:

docker search iis

Uff – I think that information should get you going. Please be advised that this is a learning course for me as well 🙂 so if I have written something terribly misleading here, please let me know and I will update it.

So as not to leave you without some good places to look, here are the links I used for this:

 

Hope you liked the article! Stay tuned for more!

 

 


MacOs – Multiple terminals with customised color scheme

If you are like me 😀 – not limiting yourself to only one operating system – then you probably move between the world of Windows and the world of Linux 😀

At the moment I have set up my working environment in a way that allows me to work with both systems. So on one end I have the new and shiny Windows 10, and on the other boot I have Mac OS.

And on Mac OS I have been looking for software that would give me better control and visibility of my sessions than the standard Terminal app. With a bit of looking around I found a couple of alternatives, and the one that really got my attention is iTerm.

The way it looks is more than satisfying 🙂 You can see a detailed view of a horizontal split on the screenshot below:

[Screenshot: iTerm with a horizontal split]

 

The program has a lot of cool color schemes to offer. What I also did was edit my profile file according to the solutions mentioned in this post.

If you would rather just get the details of the modification, here it is:

export CLICOLOR=1
export LSCOLORS=GxFxCxDxBxegedabagaced
export PS1='\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

 

Hope this will help you to customise your environment up to your needs 😀 If you are using other helpful tools feel free to share in comments!