PowerShell – Creating PSObject from template

While working with PowerShell I came across a really cool way to work with PSObjects. It is as simple as calling one of the underlying object methods. But first things first – let's create a template object:

$AccessRules = New-Object PsObject
$AccessRules.PsObject.TypeNames.Insert(0, "FileSystemAccessRules")
$AccessRules | Add-Member -MemberType NoteProperty -Name subFolder -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name identity -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name rights -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name InheritanceFlags -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name accessControlType -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name preserveInheritance -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name isInherited -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name owner -Value ''
$AccessRules | Add-Member -MemberType NoteProperty -Name PropagationFlags -Value ''


That's really easy – now it's time to simply use it as a base for other objects:

$FS_TMP_AR_1 = $AccessRules.psobject.Copy()

$FS_TMP_AR_1.accessControlType = 'Allow'
$FS_TMP_AR_1.identity          = 'BUILTIN\Administrators'
$FS_TMP_AR_1.InheritanceFlags  = "ContainerInherit, ObjectInherit"
$FS_TMP_AR_1.isInherited       = 1
$FS_TMP_AR_1.owner             = "BUILTIN\Administrators"
$FS_TMP_AR_1.preserveInheritance = 1
$FS_TMP_AR_1.rights              = 'FullControl'
$FS_TMP_AR_1.subFolder           = ''
$FS_TMP_AR_1.PropagationFlags ="None"


And that's it – voilà 😉 The whole magic is hidden in the psobject.Copy() call above – it returns a new copy of the object instead of a reference to the template.
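To convince yourself that the copy really is independent of the template, here is a minimal self-contained sketch (using throwaway names, not the objects above):

```powershell
# Template object with a single property
$template = New-Object PsObject
$template | Add-Member -MemberType NoteProperty -Name rights -Value ''

# Copy it and modify the copy
$copy = $template.psobject.Copy()
$copy.rights = 'FullControl'

# The template is untouched - Copy() created a new object, not a reference
$template.rights   # still ''
$copy.rights       # 'FullControl'
```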



Hope this helps – happy coding!


PowerShell – Azure Resource Manager policies

Microsoft does not stop listening to people. Many IT professionals are heavily using Azure Resource Manager, and the natural course of action is to require better control over what can and cannot be done.

As simple as it may sound, Microsoft now offers ARM policies. You can find details from 23:22 onwards in the video below.


On the bright side, Microsoft has already prepared documentation for us, which is waiting here.

Is it difficult? I personally think it is not – although there is no GUI, but which engineer these days uses a GUI 🙂 You have the option to use either the REST API or PowerShell cmdlets (which communicate over that API 🙂 ).

What do policies give me control over? A policy is built on the following principle:

  "if" : {
    <condition> | <logical operator>
  "then" : {
    "effect" : "deny | audit"

As you can see, we define conditions and operators, and based on those the policy takes an action – deny or audit.
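As an illustration, a policy denying resource creation outside chosen regions could look roughly like this (a sketch only – the exact schema is in the documentation linked above, and the region values are just examples):

```json
{
  "if": {
    "not": {
      "field": "location",
      "in": [ "westeurope", "northeurope" ]
    }
  },
  "then": {
    "effect": "deny"
  }
}
```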


At the moment I'm not dropping any extra examples – the documentation already has a couple of them – so you might want to try those out as you read the details.


Happy automating 🙂


PowerShell – Autodocument your modules/scripts using markdown

When writing your scripts or modules, have you not wished that it would all document itself? Isn't that what we should be aiming for when creating automations? 🙂 So that automations would automate documenting themselves?

This is exactly what automation should be about, and today I'm going to show you how I create automated documentation for extremely big modules in seconds. As mentioned before, we will be using Markdown, so it would be great if you would jump here and get some more info if this is something new to you.



In order for this to work you must have the good habit of documenting your functions. This is the key to success. An example of such function documentation using the comment-based approach can look as follows:

function invoke-SomeMagic
{
    <#
        .SYNOPSIS
        Creates magical events

        .DESCRIPTION
        This function executes magical events all around you. By defining parameters
        you have direct control over how difficult it will seem, and how many people
        are watching will have direct influence on the range of events.

        .PARAMETER NumberOfPeople
        This parameter defines how many people are looking at your screen at the time of invoking the cmdlet

        .PARAMETER DifficultyImpression
        This parameter defines how difficult what you are currently doing looks

        .EXAMPLE
        invoke-SomeMagic -NumberOfPeople 1 -DifficultyImpression 10

        Creates really difficult looking magic for one person

        .EXAMPLE
        invoke-SomeMagic -NumberOfPeople 100 -DifficultyImpression 10

        Creates a magical show
    #>

    # Function doing something here 🙂 ...........
}



Auto documenting script

Now, what would an automation be without automating it 😀 ? Below is my implementation of autodocumenting to Markdown.
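The embedded script itself is not reproduced here, but the core idea can be sketched roughly like this (Microsoft.PowerShell.Utility stands in for your own module; the output path and the layout of the generated Markdown are assumptions):

```powershell
# Demo: generate Markdown help for the commands in a module.
# 'Microsoft.PowerShell.Utility' is used purely as a stand-in for your own module.
$moduleName = 'Microsoft.PowerShell.Utility'
$outFile    = Join-Path ([System.IO.Path]::GetTempPath()) "$moduleName.md"

$md = "# Module: $moduleName`r`n"
foreach ($cmd in (Get-Command -Module $moduleName | Select-Object -First 3)) {
    # Pull the comment-based help and turn it into Markdown sections
    $help = Get-Help $cmd.Name
    $md  += "`r`n## $($cmd.Name)`r`n`r`n$($help.Synopsis)`r`n"
    foreach ($param in $help.parameters.parameter) {
        $md += "`r`n**-$($param.name)** : $($param.description.Text)`r`n"
    }
}

# Write out with an explicit encoding (online PDF converters can choke
# on the default one)
$md | Out-File -FilePath $outFile -Encoding utf8
```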


What I really like here is the fact that it generates a temporary file during documentation (I discovered that encoding causes problems with the online PDF converter). The whole thing can be changed to suit your needs and layout requirements.


Convert it to PDF

The last stage would be converting it to PDF. At the moment I'm using http://www.markdowntopdf.com/ to convert the file prepared by the above script, and I must say the results are extremely satisfying.



I have prepared a small demo of how it works in action. For this purpose I created a demo module with 3 dummy functions and then ran the script. Below is a snippet of how it looks. As mentioned before – I really like this, and that kind of file can be nicely sent to another engineer to quickly get them familiar with your module.




Powershell – Network cmdlets

In an effort to move away from old-school habits of using e.g. nslookup instead of PS cmdlets, I thought it would be beneficial, for reference, to reblog a quite interesting article about replacing those cmd tools with pure PowerShell cmdlets. You can find the original article here.


Used to get IP configuration.


Get-NetIPAddress | Sort InterfaceIndex | FT InterfaceIndex, InterfaceAlias, AddressFamily, IPAddress, PrefixLength -Autosize
Get-NetIPAddress | ? AddressFamily -eq IPv4 | FT -AutoSize
Get-NetAdapter Wi-Fi | Get-NetIPAddress | FT -AutoSize



Check connectivity to target host.


Test-NetConnection www.microsoft.com
Test-NetConnection -ComputerName www.microsoft.com -InformationLevel Detailed
Test-NetConnection -ComputerName www.microsoft.com | Select -ExpandProperty PingReplyDetails | FT Address, Status, RoundTripTime
1..10 | % { Test-NetConnection -ComputerName www.microsoft.com -RemotePort 80 } | FT -AutoSize



Translate IP to name or vice versa


Resolve-DnsName www.microsoft.com
Resolve-DnsName microsoft.com -type SOA
Resolve-DnsName microsoft.com -Server 8.8.8.8 -Type A   # any reachable DNS server will do; 8.8.8.8 is just an example



Shows the IP routes (the related cmdlets can also be used to add/remove routes).


Get-NetRoute -Protocol Local -DestinationPrefix 192.168*
Get-NetAdapter Wi-Fi | Get-NetRoute



Trace route. Shows the IP route to a host, including all the hops between your computer and that host.

Test-NetConnection www.microsoft.com -TraceRoute
Test-NetConnection outlook.com -TraceRoute | Select -ExpandProperty TraceRoute | % { Resolve-DnsName $_ -type PTR -ErrorAction SilentlyContinue }




Shows current TCP/IP network connections.


Get-NetTCPConnection | Group State, RemotePort | Sort Count | FT Count, Name -AutoSize
Get-NetTCPConnection | ? State -eq Established | FT -AutoSize
Get-NetTCPConnection | ? State -eq Established | ? RemoteAddress -notlike 127* | % { $_; Resolve-DnsName $_.RemoteAddress -type PTR -ErrorAction SilentlyContinue }



So, happy moving into the object-oriented world of PowerShell 🙂



PowerShell – Active Directory changes synchronization with cookie

In today's post I wanted to show you something that may be of interest to those who need to find recent Active Directory changes but are challenged by e.g. a big AD forest with a large number of objects, hitting performance problems when executing queries. So where does this problem come from? Well, if you have an Active Directory with a lot (really a lot) of objects, then querying for changes quite often can be troublesome.

But don't worry – there are a couple of ways to tackle this challenge. If you look for more details you will find that you can just query for information (duh?!), subscribe to be notified when changes occur (push), or make incremental queries (pull). Today we will investigate querying using a synchronization cookie.

The principle here is to use a cookie which allows us to poll for changes since the last time we queried AD. This way we can run a very specific query and return only the subset of properties we are really interested in.


The whole code is quite simple to implement and consists of the following:
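The embedded code is not reproduced here, but a minimal sketch of the cookie-based approach could look like the following (the server name, base DN, and cookie file path are placeholder values – adjust them to your environment):

```powershell
# Load the LDAP protocols assembly
Add-Type -AssemblyName System.DirectoryServices.Protocols

# Connect to a domain controller (hypothetical name)
$connection = New-Object System.DirectoryServices.Protocols.LdapConnection 'dc01.contoso.com'

# Very specific query returning only the properties we are interested in
$request = New-Object System.DirectoryServices.Protocols.SearchRequest(
    'DC=contoso,DC=com', '(objectClass=user)', 'Subtree', @('name','mail'))

# Attach the DirSync control - with an empty cookie the first poll returns all matches
$dirSync = New-Object System.DirectoryServices.Protocols.DirSyncRequestControl
$request.Controls.Add($dirSync) | Out-Null

$response = $connection.SendRequest($request)

# Persist the returned cookie; feed it into DirSyncRequestControl on the next
# poll to receive only the changes made since this query
$cookie = ($response.Controls |
    Where-Object { $_ -is [System.DirectoryServices.Protocols.DirSyncResponseControl] }).Cookie
[System.IO.File]::WriteAllBytes('C:\temp\adsync.cookie', $cookie)
```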

And that would be all for this. From the code above you can see that subsequent requests will be based on changes since the last poll (based, of course, on the query you provided). In one of the next posts we will focus on doing the same in C#, as some of you may want to use it in more DevOps-oriented scenarios.







Pester – Introduction to test driven development (TDD) for Powershell

Today I wanted to start a series on Pester for PowerShell. If you have not heard about it before, you might find it quite interesting. It allows you to write code and test it alongside.

A real-life example of why this would be useful? Nothing easier – imagine complex functions executing chained operations. Making a small modification to one piece might not seem to have any drawbacks for the whole operation… but are you sure? It might turn out that this small modification was used somewhere down the chain and the impact is not initially visible.

And this is where Pester comes to the rescue! By getting into the habit of writing this way you will save yourself from butterfly effects. I can assure you that thanks to this approach I was able to avoid several situations where exactly such small changes without visible impact would have broken a lot of things 🙂

Get familiar

Pester is actively developed on GitHub, and you can head to the project page. I recommend checking out the wiki page and the open issues, as those two are extremely useful sources of information.


Install Pester

Well, there is not much to say 🙂 With new and shiny PowerShell it cannot be simpler than:
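The embedded snippet presumably boils down to the PowerShell Gallery one-liner (PowerShell 5+ with PowerShellGet assumed):

```powershell
# Install Pester from the PowerShell Gallery for the current user
Install-Module -Name Pester -Scope CurrentUser
```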




And that was it – you are now set up for your first test.


First test

In order to run a simple test we will create ourselves 2 files: one for our real function and one for the tests. Pester makes this really easy, and therefore we can use a built-in cmdlet to prepare those 2 for us:

New-Fixture -Name FirstFunctionTest



Let's make a dummy function in our FirstFunctionTest.ps1 file. I will keep this example really easy 🙂

function FirstFunctionTest
{
    return 1
}

And now let's move to the file FirstFunctionTest.Tests.ps1 and write the following:

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "FirstFunctionTest" {
    It "returns value that is not null" {
        FirstFunctionTest | Should Not BeNullOrEmpty

    It "returns value that is exactly 1" {
        FirstFunctionTest | Should be 1


The majority of the code was prepared by Pester. I just defined that the function must return a value that is not null, and that the return value must be equal to 1. Great! Let's run this simple test now.


And the results are instant 🙂



When a change was made

So now we will change our function to return something different – in a nutshell, we will simulate the fact that a change has been made that can have a big impact 😀 Thanks to Pester you would immediately see this 🙂
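For instance, a change as small as this one would be caught right away by the "exactly 1" test:

```powershell
function FirstFunctionTest
{
    return 2   # changed from 1 - 'FirstFunctionTest | Should Be 1' now fails
}
```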



This is only a small example showing just the tip of what Pester can do. In the next posts we will be investigating much more complex scenarios. Stay tuned 🙂




Powershell with Azure – deploy IaaS with Azure Resource Manager

These days managing the cloud should be something that is well automated and that you can be really comfortable with. Microsoft Azure, when using Azure Resource Manager, allows you to manage infrastructure via APIs or via PowerShell (which calls those web APIs under the hood).

I have to say that both of the approaches are quite nice. I have already worked some time ago on ARM JSON templates (using the Visual Studio add-on), and they enable you to perform advanced operations in a declarative way.

The good news is that we can do that with PowerShell as well. I'm aware that all over the internet you can find ready scripts that will do deployments with the click of a button 🙂 but this is about exercising 🙂 as that's how we learn.

First, you should make sure that you have the Azure PowerShell module installed. So far I have always been using the Web Platform Installer. Once installed, you should see it listed when querying for modules:

PS S:\> Get-Module *azure* -ListAvailable
ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   0.9.8      Azure                               {Disable-AzureServiceProjectRemoteDesktop, Enable-AzureSer...


With the above prerequisite in place, we can continue with our exercise. Our target will be to deploy 4 virtual machines. The first of them will become a domain controller and should have a static IP address. The remaining ones will use dynamic addresses. Also, we will not be creating availability sets, and we will only have one public IP address (we will investigate a different setup in one of the next posts) which will expose port 3389 for us (although we will be restricting that via security groups).

Of course I don't have to remind you that for this you need a valid Azure subscription (but I assume you have one – even a trial 🙂 ). The script as a whole is available via GitHub and is linked at the end of this post.


General setup

First we start off with setting up our Azure subscription credentials and defining subscription details and the number of VMs to be created. Here we start off with getting our credentials (if we were to use Azure AD and delegate credentials to a newly created user, we could pass a PSCredential object as an argument). Later on we select one of the available subscriptions (we can use Out-GridView to give the end user the option of selecting one).



#region Setup subscription and script variables

## Get up your credentials (only if using Azure AD )
# $credentialSubscription = Get-Credential
# Add-AzureAccount -Credential $credentialSubscription

## Enable debugging
$DebugPreference ='Continue'

## Login into your subscription

# using grid view select a subscription - in scripted scenario you will have the ID supplied to this script 
$subscriptionId =  (Get-AzureSubscription |Out-GridView -Title 'Choose your subscription' -PassThru).SubscriptionId

# Since user can click cancel if we dont have a subscriptionId we quit - again only for non automated scenario 
if ( [string]::IsNullOrEmpty( $subscriptionId ) ) { return }

# If you have more than 1 subscription associated it might be handy to choose current 🙂 
Select-AzureSubscription -SubscriptionId $subscriptionId -Current

## Switch to ARM - be aware that this functionality will be deprecated 
## https://github.com/Azure/azure-powershell/wiki/Deprecation-of-Switch-AzureMode-in-Azure-PowerShell 
Switch-AzureMode AzureResourceManager

## Check available locations 
$azureLocations= Get-AzureLocation 

## Check which locations we have available 
($azureLocations | Where-Object Name -eq ResourceGroup).Locations

## Select one location (we could use the grid view however I will deploy in West Europe)
$selectedAzureLocation = 'West Europe'

## Check registered providers - also useful when looking for resources 

## Define resources prefix that we will use across this script 
## If this would be coming from variable outside of script we would need to make sure its lowercase 
$rscprefix = ('armdeploy').ToLower()

## Create tags to be used later 
$tags = New-Object System.Collections.ArrayList
$tags.Add( @{ Name = 'project'; Value = 'armdeploy' } )
$tags.Add( @{ Name = 'env'; Value = 'demo' } )

## Another way to create tags 
$tags = @( @{ Name='project'; Value='armdeploy' }, @{ Name='env'; Value='demo'} )

## Number of VMs to create 
$VMcount = 4



The most important part here is switching the Azure operations mode, done with:

Switch-AzureMode AzureResourceManager

This command has been deprecated and will not be available in the future! Please take a look at the post here for more detailed information!

And since I like to be in control of what's going on, I tend to change the output to be more verbose on the debug side. This is done easily by specifying:

## Enable debugging
$DebugPreference ='Continue'


Create resource group

Within Azure nowadays we have the concept of resource groups, which are a form of "containers" keeping related resources together. So if we want to create new objects we must start with a resource group. Creating one is quite easy.

#region Create Azure Resource Group

## Prepare resource group name 
$rscgName = "rg-${rscprefix}"

## Check if our resource group exists 
If (!(Test-AzureResourceGroup -ResourceGroupName $rscgName))
{
    # Does not exist - create it - also set up deployment name
    $resourceGroup = New-AzureResourceGroup -Name $rscgName -Location $selectedAzureLocation -Tag $tags -DeploymentName "deploy-${rscgName}"
}
else
{
    # Exists - get the resource by resource group name
    $resourceGroup = Get-AzureResourceGroup -Name $rscgName
}



Throughout the rest of the post you will see me checking for resources using Test-&lt;typeOfResource&gt;; however, looking at GitHub shows that some of those are deprecated as well. So it might be that this part will require a bit of rework.

Create storage account

In order to store the OS and data disks we must have an object within Azure, and here we can utilize Azure storage accounts. For this we create an account – but in a real-life scenario you would, for example, just go ahead and use an existing one.

#region Create Azure Storage Account

# Since we need to store the virtual machine data somewhere we need storage account 
# we can script in a way that if it does not exist it will be created within our resource group

## Storage account name 
$saName = "sa${rscprefix}"

## handy method to find valid types for storage account type

# First we get our command 
$gc = Get-Command New-AzureStorageAccount

# Then we navigate to the property holding attributes with a TypeId of System.Management.Automation.ValidateSetAttribute 
# so it is clearly visible that those will be the validate set values 
($gc.Parameters['Type'].Attributes | Where-Object { $_.TypeId -eq [System.Management.Automation.ValidateSetAttribute] }).ValidValues

# Based on the above lets choose a type for the storage account 
$saType = 'Standard_LRS'

## Here we will check if we have storage account within our resource group
if (!(Test-AzureResource -ResourceName $saName -ResourceType 'Microsoft.Storage/storageAccounts' -ResourceGroupName $resourceGroup.ResourceGroupName))
{
    # No storage account so lets go ahead and create it based on parameters we have above
    $sa = New-AzureStorageAccount -Name $saName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -Type $saType
}
else
{
    # Storage account exists - lets grab its resource 
    $sa = Get-AzureStorageAccount -ResourceGroupName $resourceGroup.ResourceGroupName -Name $saName
}

## Once this is completed lets set subscription storage account since we do have ID already 
Set-AzureSubscription -SubscriptionId $subscriptionId -CurrentStorageAccountName $saName



Create virtual networks

In order to have networking running properly you need a network. I really like the concept of virtual networks and subnets; when connecting these directly with network interfaces and other objects, things start to make sense – it all interconnects 🙂

#region Create Azure Virtual Network

$vnetName = "vnet-${rscprefix}"

$vNetsubnet1Name = "${rscprefix}-subnet1"

$vNetsubnet2Name = "${rscprefix}-subnet2"

# Create Virtual Network if it doesn't exist
if (!(Test-AzureResource -ResourceName $vnetName -ResourceType 'Microsoft.Network/virtualNetworks' -ResourceGroupName $resourceGroup.ResourceGroupName))
{
    # Create first subnet 
    $vSubnet1 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet1Name -AddressPrefix ''

    # Create second subnet
    $vSubnet2 = New-AzureVirtualNetworkSubnetConfig -Name $vNetsubnet2Name -AddressPrefix ''

    # Create virtual network
    $vNetwork = New-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AddressPrefix '' -Subnet $vSubnet1, $vSubnet2 -Tag $tags
}
else
{
    # retrieve virtual network if already exists
    $vNetwork = Get-AzureVirtualNetwork -Name $vnetName -ResourceGroupName $resourceGroup.ResourceGroupName
}



You can see above that I create 2 subnets. Although I could get away with one, we might use the second one in upcoming posts.


Create public IP address

As mentioned before – I'm after a really simple setup here. So I will just create a single public IP address (and make sure it is resolvable with DNS) which I will be using later to connect to the VMs.

#region Create Azure Public IP address

$publicIPname = "pip-${rscprefix}" # PublicIP => pip

$publicIPdns = "dns-${rscprefix}"  # this will be our DNS name

## Here we check if our public DNS name is available (retry up to 3 times)
$retryCntDns = 0
do
{
    $domainAvailable = ( Test-AzureDnsAvailability -DomainQualifiedName $publicIPdns -Location $selectedAzureLocation )
    $retryCntDns++
}
while (!$domainAvailable -and $retryCntDns -lt 3)

# Check if we have our resource already existing 
if (!(Test-AzureResource -ResourceName $publicIPname -ResourceType 'Microsoft.Network/publicIPAddresses' -ResourceGroupName $resourceGroup.ResourceGroupName))
{
    If (!$domainAvailable)
    {
        # we dont have public domain available here - we create without DNS entry
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -Tag $tags
    }
    else
    {
        # We do have dns available - lets create it with DNS name
        $publicIp = New-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -AllocationMethod Dynamic -DomainNameLabel $publicIPdns -Tag $tags
    }
}
else
{
    # Seems like we have already public IP address so we can just go ahead and retrieve it
    $publicIp = Get-AzurePublicIpAddress -Name $publicIPname -ResourceGroupName $resourceGroup.ResourceGroupName
}



Create network security group

To provide security we can now define ACLs on objects like subnets and network interfaces, which allows us to have granular security. Below I will just create one rule for Remote Desktop access (in this example allowed from any source – which is not a good thing in production).

#region Create Network Security Group & Rules

# Define unique name for NSG resource
$nsgName = "nsg-${rscprefix}"

if (!(Test-AzureResource -ResourceName $nsgName -ResourceType 'Microsoft.Network/networkSecurityGroups' -ResourceGroupName $resourceGroup.ResourceGroupName))
{
    # Create RDP access rule (the script allows access from everywhere - you should investigate this for your environment security)
    $nsgRule_RDP = New-AzureNetworkSecurityRuleConfig `
        -Name 'allow-in-rdp' `
        -Description 'Allow Remote Desktop Access' `
        -SourceAddressPrefix * `
        -DestinationAddressPrefix * `
        -Protocol Tcp `
        -SourcePortRange * `
        -DestinationPortRange 3389 `
        -Direction Inbound `
        -Access Allow `
        -Priority 100

    # Create Network Security Group with Rule above 
    $nsg = New-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SecurityRules $nsgRule_RDP -Tag $tags
}
else
{
    # Get NSG if already created
    $nsg = Get-AzureNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup.ResourceGroupName
}



Create network interfaces

Now, to connect everything together, we create network interfaces. To the first interface we will additionally attach our public IP address.

#region Define network interfaces 

$networkInterfaces = @() # We will use this array to hold our network interfaces

# For each VM we will create a network interface
for ($count = 1; $count -le $VMcount; $count++)
{
    $nicName = "${rscprefix}-nic${count}"

    if (!(Test-AzureResource -ResourceName $nicName -ResourceType 'Microsoft.Network/networkInterfaces' -ResourceGroupName $resourceGroup.ResourceGroupName))
    {
        $nicIndex = $count - 1
        # The first VM will be our domain controller/DNS and it needs static IP address 
        if ($count -eq 1)
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id -IpConfigurationName 'ipconfig-dc01' -PrivateIpAddress -PublicIpAddressId $publicIp.Id
        }
        else
        {
            $networkInterfaces += New-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName -Location $selectedAzureLocation -SubnetId $vNetwork.Subnets[0].Id -NetworkSecurityGroupId $nsg.Id
        }
    }
    else
    {
        # retrieve existing
        $networkInterfaces += Get-AzureNetworkInterface -Name $nicName -ResourceGroupName $resourceGroup.ResourceGroupName
    }
}




Provision VMs

And now the time has come to finally provision virtual machines based on resources we got prepared for this.

#region Provision virtual machines

## If you would like to you could use those to present enduser (or yourself) with visual option to choose publisher/offer and SKU 
## as this is scripted version example we will use hardcoded values 

#$publisherName = ( Get-AzureVMImagePublisher -Location $selectedAzureLocation ).PublisherName | Out-GridView -Title 'Select a VM Image Publisher ...'  -PassThru
#$offerName = ( Get-AzureVMImageOffer -PublisherName $publisherName -Location $selectedAzureLocation ).Offer | Out-GridView -Title 'Select a VM Image Offer ...' -PassThru
#$skuName = ( Get-AzureVMImageSku -PublisherName $publisherName -Offer $offerName -Location $selectedAzureLocation ).Skus |Out-GridView -Title 'Select a VM Image SKU' -PassThru

$publisherName = 'MicrosoftWindowsServer'

# Offer and SKU hardcoded to match the publisher above (example values)
$offerName = 'WindowsServer'
$skuName   = '2012-R2-Datacenter'

# Take latest version
$version = 'latest'

# We will use basic version of VMs - later we will be able to provision it further
$vmSize = 'Basic_A1'

# Get credentials for admin account - you may want to modify username
$vmAdminCreds = Get-Credential Adminowski -Message 'Provide credentials for admin account'

# array to hold VMs
$vm = @()

# Create VMs
for ($count = 1; $count -le $VMcount; $count++)
{
    # create suffixed VM name
    $vmName = "vm-${count}"

    # Check if resource already exists
    if (!(Test-AzureResource -ResourceName $vmName -ResourceType 'Microsoft.Compute/virtualMachines' -ResourceGroupName $resourceGroup.ResourceGroupName))
    {
        $vmIndex = $count - 1

        $osDiskLabel = 'OSDisk'
        $osDiskName  = "${rscprefix}-${vmName}-osdisk"
        $osDiskUri   = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${osDiskName}.vhd"

        $dataDiskSize  = 200 # Size in GB
        $dataDiskLabel = 'DataDisk01'
        $dataDiskName  = "${rscprefix}-${vmName}-datadisk01"
        $dataDiskUri   = $sa.PrimaryEndpoints.Blob.ToString() + "vhds/${dataDiskName}.vhd"

        $vmConfig = New-AzureVMConfig -VMName $vmName -VMSize $vmSize |
            Set-AzureVMOperatingSystem `
                -Windows `
                -ComputerName $vmName `
                -Credential $vmAdminCreds `
                -ProvisionVMAgent `
                -EnableAutoUpdate |
            Set-AzureVMSourceImage `
                -PublisherName $publisherName `
                -Offer $offerName `
                -Skus $skuName `
                -Version $version |
            Set-AzureVMOSDisk `
                -Name $osDiskLabel `
                -VhdUri $osDiskUri `
                -CreateOption fromImage |
            Add-AzureVMDataDisk `
                -Name $dataDiskLabel `
                -DiskSizeInGB $dataDiskSize `
                -VhdUri $dataDiskUri `
                -CreateOption empty |
            Add-AzureVMNetworkInterface `
                -Id $networkInterfaces[$vmIndex].Id

        New-AzureVM `
            -VM $vmConfig `
            -ResourceGroupName $resourceGroup.ResourceGroupName `
            -Location $selectedAzureLocation `
            -Tags $tags
    }
    else
    {
        # Get the VM if already provisioned
        $vm += Get-AzureVM -Name $vmName -ResourceGroupName $resourceGroup.ResourceGroupName
    }
}




Look at completed action

Once the whole script completes, we get direct access to our newly created resources. It looks good and is a noteworthy starting point for automation and orchestration. From here the next logical step is to describe how this infrastructure should be configured with DSC, and that is something we will do in one of our next posts.




Happy powershelling 🙂




Script in full


Powershell – first character to upper

This is just a quick write-up. If you need to change the first character of a string with PowerShell, a good way to do it would be something like:

$variableToChange = "all lowercase"
$result = -join ($variableToChange.Substring(0,1).ToUpper() ,$variableToChange.Substring(1,$variableToChange.Length-1 ) )


As an alternative, you can apply title capitalization to the whole string with:

(Get-Culture).TextInfo.ToTitleCase('rafpe ninja')



PowerShell – using Nlog to create logs

If you are after a logging framework, I can recommend one I have been using not only on Windows but also in C# development for web projects. It's called NLog and it is quite powerful, allowing you to log not only in a specific format or layout, but also to have reliable logging (by having e.g. multiple targets with failover) with the required performance (e.g. async writes). That's not all! Thanks to out-of-the-box features you can log to flat files, databases, network endpoints, web APIs… that's just great!

NLog is available on GitHub here, so I recommend that you go there and get yourself familiar with the wiki explaining usage and showing some examples.

At this point I can tell you that you can either use an XML config file or configure the logger on the fly before creation. In this post I will show you both options so you can choose the best one for you.


The high-level process looks as follows:

  1. Load assembly
  2. Get configuration ( or create it )
  3. Create logger
  4. Start logging
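Sketched end-to-end (the DLL and config paths are placeholders), the four steps above map to something like:

```powershell
# 1. Load assembly (byte-array load avoids file locks - see below)
$dllBytes = [System.IO.File]::ReadAllBytes('C:\NLog.dll')
[System.Reflection.Assembly]::Load($dllBytes) | Out-Null

# 2. Get configuration (XML file variant)
[NLog.LogManager]::Configuration = New-Object NLog.Config.XmlLoggingConfiguration('C:\NLog.config')

# 3. Create logger
$logger = [NLog.LogManager]::GetLogger('logger.name')

# 4. Start logging
$logger.Info('hello from PowerShell')
```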


Nlog with XML configuration file

The whole PowerShell script along with the configuration module looks as follows:

Now, the thing that may be of interest to you is the way we load our assembly. What I use here is reading the file into a byte array and then passing that as a parameter to the assembly load method.

$dllBytes = [System.IO.File]::ReadAllBytes( "C:\NLog.dll" )
[System.Reflection.Assembly]::Load($dllBytes)

The reason for doing it this way is to avoid situations where the file would be locked by 'another process'. I have had that happen in the past, and with this approach it will not happen 🙂


The next part, with customized data, is used when we would like to pass custom fields into our log. The details are described here on the NLog page.
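As a rough sketch (assuming the NLog assembly is already loaded and a `$logger` exists), custom fields can be attached via event properties; the field names below are hypothetical:

```powershell
# Build a log event and attach custom fields as properties
$logEvent = New-Object NLog.LogEventInfo([NLog.LogLevel]::Info, 'logger.name', 'User logged in')
$logEvent.Properties['username']  = 'jdoe'   # hypothetical custom field
$logEvent.Properties['sessionId'] = 42

# A layout can then render them with ${event-properties:item=username}
$logger.Log($logEvent)
```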


After that I load the configuration and assign it:

$xmlConfig                       = New-Object NLog.Config.XmlLoggingConfiguration("\\pathToConfig\NLog.config")
[NLog.LogManager]::Configuration = $xmlConfig


Nlog with configuration declared on the fly

As promised – it might be that you would like to use NLog with configuration done on the fly instead of a centralized one. In the example below I will show you the file target as one of the options. There is much more, so you may want to explore the remaining options.

    # Create file target
    $target = New-Object NLog.Targets.FileTarget  

    # Define layout
    $target.Layout       = 'timestamp=${longdate} host=${machinename} logger=${logger} loglevel=${level} messaage=${message}'
    $target.FileName     = 'D:\Tools\${date:format=yyyyMMdd}.log'
    $target.KeepFileOpen = $false
    # Init config
    $config = New-Object NLog.Config.LoggingConfiguration

    # Add target 
    $config.AddTarget('file', $target)

    # Add rule for logging
    $rule1 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Info, $target)
    $config.LoggingRules.Add($rule1)

    # Add rule for logging
    $rule2 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Off, $target)
    $config.LoggingRules.Add($rule2)

    # Add rule for logging
    $rule3 = New-Object NLog.Config.LoggingRule('*', [NLog.LogLevel]::Error, $target)
    $config.LoggingRules.Add($rule3)

    # Save config
    [NLog.LogManager]::Configuration = $config

    $logger = [NLog.LogManager]::GetLogger('logger.name')


Engineers…. Start your logging 🙂

Once done, there is not much left 😀 – you can just start logging by typing:

$logger_psmodule.Info('some info message')
$logger_psmodule.Warn('some warn message')
$logger_psmodule.Error('some error message')




PowerShell – using PSDefaultParameterValues to make your life easier

Just a quick post to let you know that you can make your life easier (most of the time 😀) when, for example, using default parameters for your PowerShell cmdlets. This is done by using $PSDefaultParameterValues, which allows you to specify default values for cmdlet parameters. It is available in the session in which you are working.

Example ?
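The embedded example is gone, but a minimal sketch could look like this (the cmdlet/parameter pairs below are just illustrative choices):

```powershell
# Keys take the form 'CmdletName:ParameterName'
$PSDefaultParameterValues = @{
    'Out-File:Encoding'            = 'utf8'
    'Export-Csv:NoTypeInformation' = $true
    'Receive-Job:Keep'             = $true
}

# From now on in this session, Out-File behaves as if -Encoding utf8 was passed:
# 'hello' | Out-File C:\temp\test.txt
```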




Just remember that those settings only live in your current PowerShell session.