
Powershell with Azure – deploy IaaS with Azure Resource Manager

These days managing the cloud should be well automated and something you are genuinely comfortable with. Microsoft Azure, when using Azure Resource Manager, lets you manage infrastructure via REST APIs or via PowerShell (which calls those web APIs under the hood).

I have to say that both approaches are quite nice. Some time ago I worked with ARM JSON templates (using the Visual Studio add-on), and they enable you to perform advanced operations in a declarative way.

The good news is that we can do the same with PowerShell. I'm aware that all over the internet you can find ready-made scripts that will do deployments at the click of a button 🙂 but this is about exercising 🙂 as that's how we learn.

The first thing to make sure of is that you have the Azure PowerShell module installed. So far I have always used the Web Platform Installer. Once it is installed, you should see it listed when you query for modules:

PS S:\> Get-Module *azure* -ListAvailable
ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   0.9.8      Azure                               {Disable-AzureServiceProjectRemoteDesktop, Enable-AzureSer...

 

With the above prerequisite in place we can continue with our exercise. Our target will be to deploy 4 virtual machines. The first of them will become a domain controller and should have a static IP address; the remaining ones will use dynamic addresses. We will not be creating availability groups, and we will have only one public IP address (we will investigate a different setup in one of the next posts), which will expose port 3389 for us (although we will be restricting that via security groups).

Of course I don't have to remind you that for this you need a valid Azure subscription (but I assume you have one – even a trial 🙂 ). The script as a whole is available via GitHub and will be linked by the end of this post.

 

General setup

We start off by setting up our Azure subscription credentials and defining subscription details and the number of VMs to be created. First we get our credentials (if we were using Azure AD and had delegated credentials to a newly created user, we could pass a PSCredential object as an argument). Later on we select one of the available subscriptions (we can use Out-GridView to give the end user the option of selecting one).

 

 

#region Setup subscription and script variables

## Get your credentials (only if using Azure AD )
# $credentialSubscription = Get-Credential
# Add-AzureAccount -Credential $credentialSubscription

## Enable debugging
$DebugPreference ='Continue'


## Log in to your subscription
Add-AzureAccount 

# using grid view select a subscription - in scripted scenario you will have the ID supplied to this script 
$subscriptionId =  (Get-AzureSubscription |Out-GridView -Title 'Choose your subscription' -PassThru).SubscriptionId

# Since the user can click cancel: if we don't have a subscriptionId we quit - again only for the non-automated scenario 
if ( [string]::IsNullOrEmpty( $subscriptionId ) ) { return }

# If you have more than 1 subscription associated it might be handy to choose current :) 
Select-AzureSubscription -SubscriptionId $subscriptionId -Current

## Switch to ARM - be aware that this functionality will be deprecated 
## https://github.com/Azure/azure-powershell/wiki/Deprecation-of-Switch-AzureMode-in-Azure-PowerShell 
Switch-AzureMode AzureResourceManager

## Check available locations 
$azureLocations= Get-AzureLocation 

## Check which locations we have available 
($azureLocations | Where-Object Name -eq ResourceGroup).Locations

## Select one location (we could use the grid view however I will deploy in West Europe)
$selectedAzureLocation = 'West Europe'

## Check registered providers - also useful when looking for resources 
Get-AzureProvider

## Define resources prefix that we will use across this script 
## If this were coming from a variable outside the script we would need to make sure it's lowercase 
$rscprefix = ('armdeploy').ToLower()

## Create tags to be used later 
$tags = New-Object System.Collections.ArrayList
$tags.Add( @{ Name = 'project'; Value = 'armdeploy' } )
$tags.Add( @{ Name = 'env'; Value = 'demo' } )

## Another way to create tags 
$tags = @( @{ Name='project'; Value='armdeploy' }, @{ Name='env'; Value='demo'} )

## Number of VMs to create 
$VMcount=4

#endregion

 

The most important part here is switching the Azure operations mode, done with:

Switch-AzureMode AzureResourceManager

This command has been deprecated and will not be available in the future! Please take a look at the deprecation notice linked in the script above for more detailed information.
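
For reference, here is a rough sketch of what the post-deprecation equivalent looks like – note this assumes the later AzureRm-flavoured module, which is not the one used in this post:

## In Azure PowerShell 1.0+ there is no mode switch - ARM cmdlets simply carry the AzureRm prefix
# Login-AzureRmAccount
# Select-AzureRmSubscription -SubscriptionId $subscriptionId
# New-AzureRmResourceGroup -Name 'rg-armdeploy' -Location 'West Europe'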

And since I like to be in control of what's going on, I tend to make output more verbose on the debug side. This is done easily by specifying:

## Enable debugging
$DebugPreference ='Continue'

 

Create resource group

Within Azure we now have the concept of resource groups, which are a form of “container” for keeping related resources together. So if we want to create new objects we must start with a resource group. Creating one is quite easy.

#region Create Azure Resource Group

## Prepare resource group name 
$rscgName = "rg-${rscprefix}"

## Check if our resource group exists 
If (!(Test-AzureResourceGroup -ResourceGroupName $rscgName))
    {
        # Does not exist - create it - also set up deployment name
        $resourceGroup = New-AzureResourceGroup -Name $rscgName -Location $selectedAzureLocation -Tag $tags -DeploymentName "deploy-${rscgName}"
    }
else
    {
        # Exists - get the resource by resource group name
        $resourceGroup = Get-AzureResourceGroup -Name $rscgName
    }


#endregion

 

Throughout the rest of the post you will see me checking for resources using Test-<typeOfResource>; however, a look at GitHub shows that some of those cmdlets are deprecated as well, so it may be that this part will require a bit of rework.
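
If that happens, one way to do the check without the Test-* cmdlets is to ask for the resource and swallow the "not found" error – a sketch (my own workaround, not part of the original script):

## Existence check without Test-* cmdlets - a missing resource group simply yields $null
$existingGroup = Get-AzureResourceGroup -Name $rscgName -ErrorAction SilentlyContinue
if ($existingGroup -eq $null) { Write-Debug "Resource group ${rscgName} not found" }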

Create storage account

In order to store OS and data disks we must have a storage object within Azure, and here we can utilize Azure storage accounts. For this exercise we create an account – but in a real-life scenario you would probably just go ahead and use an existing one.

#region Create Azure Storage Account

# Since we need to store the virtual machine data somewhere we need a storage account 
# we can script it so that if it does not exist it will be created within our resource group

## Storage account name 
$saName="sa${rscprefix}"


## Handy way to find the valid values for the storage account type

# First we get our command 
$gc = Get-Command New-AzureStorageAccount

# Then we inspect the Type parameter's attributes - the one with TypeId
# System.Management.Automation.ValidateSetAttribute holds the allowed set of values 
$gc.Parameters['Type'].Attributes.ValidValues


# Based on the above lets choose a type for the storage account 
$saType = 'Standard_LRS'


## Here we will check if we have the storage account within our resource group
if (!(Test-AzureResource -ResourceName $saName -ResourceType 'Microsoft.Storage/storageAccounts' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        # No storage account, so let's go ahead and create it based on the parameters above
        $sa = New-AzureStorageAccount -Name $saName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -Type $saType

    }
else
    {
        # Storage account exists - grab it into the same variable we use later for the disk URIs
        $sa = Get-AzureStorageAccount -ResourceGroupName $resourceGroup.ResourceGroupName -Name $saName
    }


## Once this is completed let's set the subscription's storage account since we already have the ID 
Set-AzureSubscription -SubscriptionId $subscriptionId -CurrentStorageAccountName $saName


#endregion

 

Create virtual networks

For things to run properly you need networking. I really like the concept of virtual networks and subnets, and with these connecting directly with network interfaces and other objects, things start to make sense – it all interconnects 🙂

#region Create Azure Virtual Network

$vnetName = "vnet-${rscprefix}"


$vNetsubnet1Name = "${rscprefix}-subnet1"

$vNetsubnet2Name = "${rscprefix}-subnet2"

# Create Virtual Network if it doesn't exist
if (!(Test-AzureResource -ResourceName $vnetName -ResourceType 'Microsoft.Network/virtualNetworks' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    
    # Create first subnet 
    $vSubnet1 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet1Name -AddressPrefix '10.0.1.0/24'

    # Create second subnet
    $vSubnet2 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet2Name -AddressPrefix '10.0.2.0/24'

    # Create virtual network
    $vNetwork = New-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AddressPrefix '10.0.0.0/16' -Subnet $vSubnet1, $vSubnet2 -Tag $tags

} 
else 
{

    # retrieve virtual network if already exists
    $vNetwork = Get-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName

}

#endregion

You can see above that I create 2 subnets, although I could get away with one – we might use the second one in upcoming posts.

 

Create public IP address

As mentioned before, I'm after a really simple setup here. So I will just create a single public IP address (and make sure it is resolvable via DNS) which I will use later to connect to the VMs.

#region Create Azure Public IP address

$publicIPname = "pip-${rscprefix}" # PublicIP => pip

$publicIPdns = "dns-${rscprefix}"  # this will be our DNS name


## Check whether our DNS name for the public IP is available - retry with a numeric suffix, giving up after 3 attempts
$retryCntDns = 0

do
{
    $retryCntDns++
    $publicIPdns = "dns-${rscprefix}-${retryCntDns}"
    $domainAvailable = ( Test-AzureDnsAvailability -DomainQualifiedName $publicIPdns -Location $selectedAzureLocation )
}
while (!$domainAvailable -and $retryCntDns -lt 3)

# Check if we have our resource already existing 
if (!(Test-AzureResource -ResourceName $publicIPname -ResourceType 'Microsoft.Network/publicIPAddresses' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{
    If (!$domainAvailable)
    {
        # we don't have the public domain available - we create without a DNS entry
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -Tag $tags
    }
    else
    {
        # We do have DNS available - let's create it with the DNS name
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -DomainNameLabel $publicIPdns -Tag $tags
    }
    

} 
else 
{
    # Seems like we already have the public IP address, so we can just go ahead and retrieve it
    $publicIp = Get-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network security group

To provide security we can now define ACLs on objects like subnets and network interfaces, which gives us granular security. Below I will just create one rule for remote desktop access (in this example allowing it from any source – which is not a good thing in production).

#region Create Network Security Group & Rules

# Define unique name for NSG resource
$nsgName = "nsg-${rscprefix}"


if (!(Test-AzureResource -ResourceName $nsgName -ResourceType 'Microsoft.Network/networkSecurityGroups' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
{

    # Create RDP access rule (in this script we allow it from everywhere - review this against your environment's security requirements)
    $nsgRule_RDP = New-AzureNetworkSecurityRuleConfig `
        -Name 'allow-in-rdp' `
        -Description 'Allow Remote Desktop Access' `
        -SourceAddressPrefix * `
        -DestinationAddressPrefix * `
        -Protocol Tcp `
        -SourcePortRange * `
        -DestinationPortRange 3389 `
        -Direction Inbound `
        -Access Allow `
        -Priority 100

    # Create Network Security Group with Rule above 
    $nsg = New-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SecurityRules $nsgRule_RDP -Tag $tags

} 
else 
{
    # Get NSG if already created
    $nsg = Get-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName
}

#endregion

 

Create network interfaces

Now, to connect everything together, we create network interfaces. To the first interface we will additionally attach our public IP address.

#region Define network interfaces 

$networkInterfaces = @() # We will use this array to hold our network interfaces

# For each VM we will create a network interface

for ($count = 1; $count -le $VMcount; $count++) 
{

    $nicName = "${rscprefix}-nic${count}"

    if (!(Test-AzureResource -ResourceName $nicName -ResourceType 'Microsoft.Network/networkInterfaces' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {

        $nicIndex = $count - 1
        
        # The first VM will be our domain controller/DNS and it needs a static IP address 
        if ($count -eq 1)
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id -IpConfigurationName 'ipconfig-dc01' -PrivateIpAddress 10.0.1.4 -PublicIpAddressId $publicIp.Id
        }
        else
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id
        }
         

    } 
    else 
    {
        # retrieve existing
        $networkInterfaces += Get-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName
    }

}

#endregion

 

Provision VMs

And now the time has come to finally provision the virtual machines, based on the resources we have prepared.

#region Provision virtual machines


## If you'd like, you could use these to present the end user (or yourself) with a visual way to choose publisher/offer/SKU 
## as this is a scripted example we will use hardcoded values 

#$publisherName = ( Get-AzureVMImagePublisher -Location $selectedAzureLocation ).PublisherName | Out-GridView -Title 'Select a VM Image Publisher ...'  -PassThru
#$offerName = ( Get-AzureVMImageOffer -PublisherName $publisherName -Location $selectedAzureLocation ).Offer | Out-GridView -Title 'Select a VM Image Offer ...' -PassThru
#$skuName = ( Get-AzureVMImageSku -PublisherName $publisherName -Offer $offerName -Location $selectedAzureLocation ).Skus |Out-GridView -Title 'Select a VM Image SKU' -PassThru

$publisherName = 'MicrosoftWindowsServer'
$offerName='WindowsServer'
$skuName='2016-Technical-Preview-3-with-Containers'

# Take latest version
$version = 'latest'

# We will use a basic VM size - later we will be able to provision it further
$vmSize = 'Basic_A1'

# Get credentials for admin account - you may want to modify username
$vmAdminCreds = Get-Credential Adminowski -Message 'Provide credentials for admin account'

# array to hold VMs
$vm = @()

# Create VMs
for ($count = 1; $count -le $VMcount; $count++) 
{ 
    
    # create suffixed VM name
    $vmName = "vm-${count}"

    # Check if resource already exists
    if (!(Test-AzureResource -ResourceName $vmName -ResourceType 'Microsoft.Compute/virtualMachines' -ResourceGroupName $resourceGroup.ResourceGroupName)) 
    {
        
        $vmIndex = $count - 1

        $osDiskLabel = 'OSDisk'
    
        $osDiskName = "${rscprefix}-${vmName}-osdisk"

        $osDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${osDiskName}.vhd"

        $dataDiskSize = 200 # Size in GB

        $dataDiskLabel = 'DataDisk01'

        $dataDiskName = "${rscprefix}-${vmName}-datadisk01"

        $dataDiskUri = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${dataDiskName}.vhd"

        $vmConfig =  New-AzureVMConfig -VMName $vmName -VMSize $vmSize | `
            Set-AzureVMOperatingSystem `
                -Windows `
                -ComputerName $vmName `
                -Credential $vmAdminCreds `
                -ProvisionVMAgent `
                -EnableAutoUpdate |
            Set-AzureVMSourceImage `
                -PublisherName $publisherName `
                -Offer $offerName `
                -Skus $skuName `
                -Version $version |
            Set-AzureVMOSDisk `
                -Name $osDiskLabel `
                -VhdUri $osDiskUri `
                -CreateOption fromImage |
            Add-AzureVMDataDisk `
                -Name $dataDiskLabel `
                -DiskSizeInGB $dataDiskSize `
                -VhdUri $dataDiskURI `
                -CreateOption empty |
            Add-AzureVMNetworkInterface `
                -Id $networkInterfaces[$vmIndex].Id `
                -Primary

        New-AzureVM `
            -VM $vmConfig `
            -ResourceGroupName $resourceGroup.ResourceGroupName `
            -Location $selectedAzureLocation `
            -Tags $tags

    }
    else
    {
        # Get the VM if already provisioned
        $vm += Get-AzureVM -Name $vmName -ResourceGroupName $resourceGroup.ResourceGroupName
    }


}

#endregion

 

A look at the completed deployment

Once the whole script completes we get direct access to our newly created resources. It looks good and is a worthwhile starting point for automation and orchestration. From here the next logical step is to describe how this infrastructure should be configured with DSC, and that is something we will do in one of our next posts.
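
If you would rather verify from the same PowerShell session instead of the portal, a quick inventory along these lines should do (a sketch using the same 0.9.x-era cmdlets as the script above):

## List what ended up in our resource group
Get-AzureResource -ResourceGroupName $rscgName | Select-Object Name, ResourceType | Format-Table -AutoSize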

 

(Screenshot: the newly deployed resources in the Azure portal.)

 

Happy powershelling 🙂

 

 

 

Script in full

The complete script is available via GitHub, as mentioned at the start of this post.

ASP.NET 5 – Dependency injection with AutoFac

Today we will shift a bit from previous topics in order to look more at Visual Studio 2015, which gives us MVC 6 / ASP.NET 5. I personally find that Microsoft is going in the right direction – especially in being so open source.

But coming back to the original subject of this post: when you create a new project in VS2015 and select .NET 5, you can see that this is still in preview – so it may be that the information provided in this post is already out of date! I recommend you take that into account.

For .NET 5 documentation look here. And if you are more interested in Autofac, check the documentation here.

Startup.cs

        public IServiceProvider ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();

            //create Autofac container build
            var builder = new ContainerBuilder();

            //populate the container with services here..
            builder.RegisterType<DemoService>().As<IProjectDemo>();
            builder.Populate(services);

            //build container
            var container = builder.Build();

            //return service provider
            return container.ResolveOptional<IServiceProvider>();
        }
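
For completeness, here is a minimal sketch of the pieces the registration above assumes – the IProjectDemo/DemoService names come from the builder call, while the bodies and the controller are purely my own illustration:

using Microsoft.AspNet.Mvc;

public interface IProjectDemo
{
    string GetGreeting();
}

public class DemoService : IProjectDemo
{
    public string GetGreeting()
    {
        return "Hello from Autofac!";
    }
}

// MVC 6 resolves controller constructor parameters through the container we returned above
public class DemoController : Controller
{
    private readonly IProjectDemo _demo;

    public DemoController(IProjectDemo demo)
    {
        _demo = demo;
    }

    public IActionResult Index()
    {
        return Content(_demo.GetGreeting());
    }
}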

 

Project.json

  "dependencies": {
    "Autofac": "4.0.0-beta6-110",
    "Autofac.Framework.DependencyInjection": "4.0.0-beta6-110",
    "Microsoft.AspNet.Mvc": "6.0.0-beta6",
    "Microsoft.AspNet.Server.IIS": "1.0.0-beta6",
    "Microsoft.AspNet.Server.WebListener": "1.0.0-beta6",
    "Microsoft.AspNet.StaticFiles": "1.0.0-beta6"

  },

 

What I also learned at this stage: it is not smart to mix different beta versions. So if possible, try to keep them at the same level. Hope this helps and gets you going!

 

We will definitely be revisiting Autofac in later posts when we play around with creating REST services or other apps!


Running ElasticSearch/Kibana and Logstash on Docker

In today's world, if the combination of words in the subject is new to you, you need to catch up quickly 😀 In the IT world Docker is introducing a new way of operating. The days when you needed 20 sysadmins to make a deployment successful are long gone. You could say that nowadays we have DevOps who change the world with the click of a button 😀

Today we will discuss how, by running ElasticSearch + Logstash and Kibana with Docker, you can visualise your environment's behaviour and events. At this stage I would like to point out that this can be useful not only in IT, where you get insight into what is going on with your infrastructure, but it also has great potential in the era of IoT. In a single “go” you will build the required components to see its potential.

Since this will only touch on the real basics, I will try to point you to more interesting sources of information.

The whole exercise will be done on a host running Ubuntu with the following version installed:

Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty

I have already followed the Docker docs on installing the Docker engine on this OS, so make sure you have the engine installed.

As a quick verification, this is the version of Docker running during the write-up of this post:

Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64

 

So since we have that ready, let's fire up an instance of ElasticSearch. Since we would like to store data outside of the container, we need to create a folder somewhere on the host. As this is only a non-production exercise, I will just use a simple folder in the root of the filesystem. For this purpose I have created a folder called cDocker and within it the subfolder data/elasticsearch. This can be achieved by running the following in the console:

sudo mkdir -p /cDocker/data/elasticsearch

Once ready we can kick off creation of our container

sudo docker run -d --name elasticsearch -p 9200:9200 -v /cDocker/data/elasticsearch:/usr/share/elasticsearch/data elasticsearch

After a moment of pulling all the required image layers, we can see the container running on our Docker host:

(Screenshot: the elasticsearch container created and running on the Docker host.)

 

For communicating with the API, you can see we have exposed port 9200. For ease of making API calls I will be using the Postman add-on for Chrome. With that we will send a GET request to http(s)://<IP>:9200/_status, which should come back with our instance status. In my case everything works out of the box, so the reply looks as follows:
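
If you prefer the command line over Postman, the same check works with curl (using the same <IP> placeholder for your Docker host):

curl -XGET http://<IP>:9200/_status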

(Screenshot: the _status API response returned successfully in Postman.)

 

For the next part we will create the Logstash container, based on the Logstash image. The main difference here is that we will link our elasticsearch container so that they are able to talk to each other.

docker run -d --name logstash -p 25826:25826 -p 25826:25826/udp -v $(pwd)/conf:/conf --link elasticsearch:db logstash logstash -f /conf/first.conf

In the above we expose port 25826 TCP/UDP and mount a volume for configuration (here I use $(pwd) to reference an existing folder in my current console session). Next we link our elasticsearch container and give it the alias db. What remains is the name of the image and the initial command to be executed.

Now, if you paid close attention, I specified that we will be using a config file called first.conf. Since that file does not exist yet, we must create it. The contents of this file come directly from the Logstash documentation and are a really basic configuration that lets us see a working solution.
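
Based on the ports and the link alias above, a minimal first.conf looks roughly like this (a sketch in Logstash 1.5-era syntax; the db host comes from our container link):

input {
  tcp { port => 25826 }
  udp { port => 25826 }
}

output {
  elasticsearch { host => "db" protocol => "http" }
  stdout { codec => rubydebug }
}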

Now if I open 2 session windows – one to tail the logstash container logs and the other to create a telnet connection to 25826 – we will see that messages I type into the telnet session get translated and forwarded to elasticsearch.
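
In shell terms the two sessions look roughly like this (again with <IP> standing in for the Docker host address):

# Session 1 - follow the logstash container output
docker logs -f logstash

# Session 2 - type test messages interactively
telnet <IP> 25826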

(Screenshot: a test message typed into telnet arriving in the logstash logs.)

 

Of course this kind of configuration is only good for an exercise, but it quickly shows how nicely we can get the system running.

So since that's ready, it's time to set up Kibana. It's quite easy using the default image from Docker Hub. I have chosen to link the containers for the ease of this exercise:

docker run -d --name kibana --link elasticsearch:elasticsearch -p 5601:5601 kibana

And now, seconds later, we can log in to our Kibana server and take a look at our forensic details 🙂 The message we sent before as a test is already visible! How cool is that 😀 ?

 

 

(Screenshot: the first test event visible in Kibana.)

Let's add some extra fake messages so we have something to visualise. I will be doing that using the telnet command, sending some dummy messages to logstash.
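
If typing them by hand gets tedious, a quick loop through netcat does the same job (my own variation; <IP> is again the Docker host):

for i in $(seq 1 10); do echo "dummy message $i" | nc <IP> 25826; done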

After that's done 🙂 we can create visualizations – and from there onwards … some awesome dashboards. For the purposes of this exercise I have just created basic pie charts to show you how it can look. Of course there is much more power there, and you should explore the available resources if you want to do more 😀

(Screenshot: a basic pie-chart dashboard built from the dummy messages.)

 

Well, that concludes this short introduction to logging with the ELK stack. There are of course a lot of other considerations when setting this up for production: using Redis to avoid bottlenecks and lost messages, avoiding complex message parsing, etc. We will try to look into some of those in upcoming posts!

 

 


Powershell – first character to upper

This is just a quick write-up. If you need to upper-case the first character of a string with PowerShell, a good way to do it would be something like:

$variableToChange = "all lowercase"
$result = -join ($variableToChange.Substring(0,1).ToUpper(), $variableToChange.Substring(1))

 

As an alternative, you can apply title capitalization to the whole string with:

(Get-Culture).TextInfo.ToTitleCase('rafpe ninja')
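
If you need this more than once it is easy to wrap in a small helper function (the function name here is just my own choice):

function ConvertTo-UpperFirst {
    param( [string]$Text )
    if ( [string]::IsNullOrEmpty($Text) ) { return $Text }
    return -join ( $Text.Substring(0,1).ToUpper(), $Text.Substring(1) )
}

ConvertTo-UpperFirst 'all lowercase'   # returns 'All lowercase'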