
Category Archives: Azure


Welcome to contoso.se! My name is Anders Bengtsson and this is my blog about Azure infrastructure and system management. I am a senior engineer in the FastTrack for Azure team, part of Azure Engineering, at Microsoft. Contoso.se has two main purposes: first as a platform to share information with the community, and second as a notebook for myself.

Everything you read here is my own personal opinion and any code is provided "AS-IS" with no warranties.

Anders Bengtsson

MVP awarded 2007, 2008, 2009, 2010

My Books
Service Manager Unleashed
Orchestrator Unleashed
Orchestrator 2012 Unleashed
Inside the Microsoft Operations Management Suite

Inside Microsoft Azure IaaS [free e-book]

A long time ago in a galaxy… far far away…

We started working on this book early last year, I think. There have been a lot of changes, updates, new features and services in Azure since then. But now it is finally public 🙂 The book provides a hands-on guide to utilizing Infrastructure-as-a-Service (IaaS) resources in Azure, with a primary focus on Azure Virtual Machines. Additional content covered in the book includes Azure PowerShell, Azure Virtual Networking, Azure Storage, Connecting Azure to your Datacenter, Migration, and Backup & Disaster Recovery.

Authors are Ryan Irujo, Janaka Rangama, Pete Zerger and me.

Download your copy of the free e-book here.

happy reading 🙂

Setting up team permissions with custom RBAC role and ARM policy

A common Azure infrastructure scenario is that the subscription administrator sets up one resource group per service/application. Service administrators or application administrators are then assigned permissions to the resource group and all resources within it. For a long time, it has been a challenge to limit what the service administrators can do within the resource group. For example, if you assign them permission to create a virtual machine, they can create a virtual machine of any size and name it anything they like. Another challenge has been that the default security roles are either too wide or too narrow: for example, if you assign service administrators the Contributor role they can create any kind of resource within the resource group, but the Virtual Machine Contributor role will not give them permissions to work with public IP addresses for their services.

In this blogpost we will look into how ARM policies and custom RBAC (Role Based Access Control) can be used to control what type of resources are created within the resource group, and how they are created. In my example I have a resource group named CONTOSO-WE-AZFAE-RG. The naming convention is based on [ORGANIZATION]-[LOCATION, West Europe in this example]-[SERVICE/WORKLOAD, Azure Financial Analysis Engine in this example]-[RESOURCE TYPE, resource group in this example].

The first thing is to assign permissions to the service administrators. In this scenario I want the service administrators team to have permissions to administer everything around their virtual machines, including storage accounts and public IP addresses, but not networking. The service administrators should also have permissions to connect (join) virtual machines to an existing virtual network hosted in another resource group. Azure Role-Based Access Control (RBAC) comes with a large number of security roles. Looking at the requirements in my scenario and the default roles, Virtual Machine Contributor and Storage Account Contributor are the two best options. I added the user account (could add a group too) as a Virtual Machine Contributor and Storage Account Contributor on the resource group level.


Now if ludvig@contoso.se, one of the service administrators, tries to create a new virtual machine, including a new public IP address and a network security group, the deployment fails. This is because he only has permissions to virtual machines and storage accounts in the CONTOSO-WE-AZFAE-RG resource group. The Virtual Machine Contributor role has access to join machines to a subnet (Microsoft.Network/virtualNetworks/subnets/join/action), but the challenge in this scenario is that the VNET is hosted in another resource group, and the Virtual Machine Contributor role only applies to resources in the current resource group. Another challenge is that neither the Storage Account Contributor nor the Virtual Machine Contributor role gives the user any permissions for public IP addresses or network security groups.


To give the service administrators team permissions to create and administer both public IP addresses and network security groups, we will create a new custom security role, as none of the built-in security roles meet the scenario requirements. We will also create a second custom security role to give the service administrators permissions to join virtual machines to the VNET. Just like built-in roles, custom roles can be assigned to users, groups and applications at subscription, resource group or resource scope. Building custom security roles is described here. In this scenario we will build two custom security roles: the first to handle NSG and public IP permissions, and the second to handle joining virtual machines to the VNET.

This is the definition of the first role.
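A minimal sketch of what such a role definition file could look like (the role name, the exact action list and the subscription ID placeholder are only examples, adjust them for your environment):

@'
{
  "Name": "Contoso - NSG and Public IP Contributor",
  "IsCustom": true,
  "Description": "Can create and manage network security groups and public IP addresses in the assigned scope.",
  "Actions": [
    "Microsoft.Network/networkSecurityGroups/*",
    "Microsoft.Network/publicIPAddresses/*",
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Resources/deployments/*"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription id>" ]
}
'@ | Out-File 'C:\Azure\CustomRoleOne.txt'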


This is the definition of the second role.
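Again as a sketch (the read actions are an example; the join action is the important part):

@'
{
  "Name": "Contoso – Read and Join VNET",
  "IsCustom": true,
  "Description": "Can read virtual networks and join virtual machines to existing subnets.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/read",
    "Microsoft.Network/virtualNetworks/subnets/read",
    "Microsoft.Network/virtualNetworks/subnets/join/action"
  ],
  "NotActions": [],
  "AssignableScopes": [ "/subscriptions/<subscription id>" ]
}
'@ | Out-File 'C:\Azure\CustomRoleTwo.txt'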


To import these roles into Azure, save the definitions as, for example, TXT files, then use the New-AzureRmRoleDefinition Azure PowerShell cmdlet to import each file. For example, New-AzureRmRoleDefinition -InputFile 'C:\Azure\CustomRoleOne.txt'

Once both roles are imported, assign the user these roles. The “Contoso – Read and Join VNET” role must be assigned to the users on the resource group that contains the VNET that virtual machines will be connected to.
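If you prefer PowerShell over the portal for the assignments, something like this should do it (the first role name is the example name from the sketch above, and the VNET resource group is a placeholder):

New-AzureRmRoleAssignment -SignInName 'ludvig@contoso.se' -RoleDefinitionName 'Contoso - NSG and Public IP Contributor' -ResourceGroupName 'CONTOSO-WE-AZFAE-RG'
New-AzureRmRoleAssignment -SignInName 'ludvig@contoso.se' -RoleDefinitionName 'Contoso – Read and Join VNET' -ResourceGroupName '<resource group that contains the VNET>'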



A service administrator can now deploy a new virtual machine, including a network security group and a public IP address, in the CONTOSO-WE-AZFAE-RG resource group. A virtual machine can also be attached/connected to a VNET in another resource group, handled by the new "Contoso – Read and Join VNET" security role.


All good so far 🙂

But the challenge now is that service administrators can create a virtual machine of any size and can also give the server any name. All servers should start with "AZFAE-SRV" and use a VM size from the D-family. To solve this, we will use ARM policies. With policies we can focus on resource actions, for example restricting locations or settings on a resource. If you think about this scenario: we use RBAC to control what actions a user can perform on virtual machines, network security groups, public IP addresses and storage accounts. We can then add policies to control how these resources are provisioned, controlling the settings, for example to make sure virtual machines are always deployed in West Europe. More information about policies here.

The following script sets up a new policy that restricts virtual machine names. The last three lines of the script assign the new policy to the CONTOSO-WE-AZFAE-RG resource group.
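A sketch of what such a script could look like (the policy definition name and the exact policy rule are examples):

$policyRule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
      { "not": { "field": "name", "like": "AZFAE-SRV*" } }
    ]
  },
  "then": { "effect": "deny" }
}
'@
$definition = New-AzureRmPolicyDefinition -Name 'contoso-vm-naming' -Description 'Virtual machine names must start with AZFAE-SRV' -Policy $policyRule
# Assign the policy to the CONTOSO-WE-AZFAE-RG resource group
$rg = Get-AzureRmResourceGroup -Name 'CONTOSO-WE-AZFAE-RG'
New-AzureRmPolicyAssignment -Name 'contoso-vm-naming-assignment' -PolicyDefinition $definition -Scope $rg.ResourceId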


The following script sets up a new policy that restricts virtual machine sizes. The last three lines of the script assign the new policy to the CONTOSO-WE-AZFAE-RG resource group.
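And a corresponding sketch for the size policy (again, the names are examples; the allowed sizes match the scenario):

$policyRule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
      { "not": { "field": "Microsoft.Compute/virtualMachines/sku.name", "in": [ "Standard_D1", "Standard_D2", "Standard_D3", "Standard_D4" ] } }
    ]
  },
  "then": { "effect": "deny" }
}
'@
$definition = New-AzureRmPolicyDefinition -Name 'contoso-vm-size' -Description 'Only D1-D4 virtual machine sizes are allowed' -Policy $policyRule
# Assign the policy to the CONTOSO-WE-AZFAE-RG resource group
$rg = Get-AzureRmResourceGroup -Name 'CONTOSO-WE-AZFAE-RG'
New-AzureRmPolicyAssignment -Name 'contoso-vm-size-assignment' -PolicyDefinition $definition -Scope $rg.ResourceId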


If one of the service administrators now tries to create a virtual machine with a size other than D1, D2, D3 or D4, an error occurs during deployment, saying that the deployment action is disallowed by one or more policies.


Summary: We now have a mix of standard RBAC roles and custom RBAC roles to set up the permissions our service administrators need. We then use ARM policies to control HOW resources are configured and deployed. The next challenge would be to control how many resources the service administrators deploy. RBAC gives us control of WHAT the administrators can do, and ARM policies give us control of HOW they do it. But there is no mechanism to control how many resources they create. For example, the service administrators in this blog post can now deploy hundreds of D-series servers, as long as they have the correct name and size. This could be solved with an external self-service portal including approval steps, but that is out of scope for this blog post 🙂

If you want to review policy related events, you can use the Get-AzureRmLog cmdlet and look for Microsoft.Authorization/policies events; see here for more examples.

If you want to keep track of all access changes you can use the Get-AzureRmAuthorizationChangeLog cmdlet to access the change history log. For example "Get-AzureRmAuthorizationChangeLog -StartTime ([DateTime]::Now - [TimeSpan]::FromDays(7)) | FT Caller,Action,RoleName,PrincipalType,PrincipalName,ScopeType,ScopeName" will give you a list of all access changes from the last week. More information about this log here.


Disclaimer: The cloud is a very fast-moving target. It means that by the time you’re reading this post, everything described here could have changed completely.

Review Azure Automation jobs with PowerBI

In the Azure Portal, under an Azure Automation account, we can review automation jobs, for example the number of successful jobs during the last seven days. This is all good, but let’s say we need to know which service we spend most automation minutes on. What source starts most runbook jobs? How many minutes did each job take? Which jobs ran on a specific hybrid worker group? In this blogpost I will show an example of how this can be accomplished with PowerBI (PowerBI is an analytics service from Microsoft), an Azure SQL database and a runbook 🙂


All automation job data can be read with PowerShell, including some information we don’t see in the portal. This information can then be written, with PowerShell, to an Azure SQL database that PowerBI reads. We can then use PowerBI to drill down into the automation job data. This example includes two major steps:

  1. A scheduled PowerShell based Azure Automation runbook gets all Azure Automation job data from the Azure Automation account. The data is in some cases modified by PowerShell, for example some characters are replaced before they are stored in the database. PowerShell also calculates minutes spent per runbook job based on the start and end times from the runbook job data. The last part of the runbook writes the job data to the Azure SQL database (a small sketch of this step follows after the list).
  2. PowerBI is configured to use the Azure SQL database as a data source. PowerBI reads the data and presents it in a web-based dashboard that you can configure/design any way you want. In the figure below you can see an example of the PowerBI dashboard. On the right side of the figure you can see different parameters that can be used to filter the data and drill deeper into it.
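A minimal sketch of step 1 could look something like this (the automation account, SQL server, database, table and column names are all examples, not the actual example runbook):

# Read all job data from the automation account
$jobs = Get-AzureRmAutomationJob -ResourceGroupName 'CONTOSO-WE-AUTO-RG' -AutomationAccountName 'contoso-automation'

# Open a connection to the Azure SQL database
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = 'Server=tcp:contososql.database.windows.net,1433;Database=AutomationJobs;User ID=sqladmin;Password=<password>;Encrypt=True;'
$connection.Open()

foreach ($job in $jobs) {
    # Calculate minutes spent per runbook job from start and end time
    $minutes = 0
    if ($job.StartTime -and $job.EndTime) {
        $minutes = [math]::Round(($job.EndTime - $job.StartTime).TotalMinutes, 2)
    }
    # Write one row per job to the database
    $cmd = $connection.CreateCommand()
    $cmd.CommandText = 'INSERT INTO JobData (JobId, RunbookName, Status, Minutes) VALUES (@JobId, @RunbookName, @Status, @Minutes)'
    [void]$cmd.Parameters.AddWithValue('@JobId', $job.JobId.ToString())
    [void]$cmd.Parameters.AddWithValue('@RunbookName', $job.RunbookName)
    [void]$cmd.Parameters.AddWithValue('@Status', [string]$job.Status)
    [void]$cmd.Parameters.AddWithValue('@Minutes', $minutes)
    [void]$cmd.ExecuteNonQuery()
}
$connection.Close()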


In the figure below I have selected OMS as technology, and all the other fields adapt to only show OMS related information, for example only workers that have run an OMS related runbook.


If you look at the PowerBI figure you can see that we can filter on technology and service. This is based on tags configured on each runbook. The runbook in step one exports this data too and writes it to the Azure SQL database. With these tags we can group runbooks together, based on owner, technology, service, integration or any other way we need to group them. In the figure below you can see how tags are configured on each runbook. If you run the example runbook with no tags on your runbooks the data export will still work, but in PowerBI you will see "Not Configured" as service, technology and type of runbook.



Summary: We use an Azure Automation runbook to write automation job data to an Azure SQL database. PowerBI then reads the Azure SQL database and presents the data in an easy way. You can then use PowerBI to drill down into the data.


Note that this is provided “AS-IS” with no warranties at all. This is not a production ready solution for your production environment, just an idea and an example.

Download the example runbook here. Download SQL script to setup the database here.

Document Azure subscription with PowerShell

I would like to share an idea around documentation for an Azure subscription, and hopefully get some ideas and feedback about it. What I see at customers is that documenting which resources are deployed to Azure is a challenge. It is also a challenge to easily get an overview of configuration and settings. Fortunately, with Azure PowerShell we can easily get information about all resources in Azure. I have built an example script that exports some settings from Azure and writes them to a Word document.

The example script will export information about Virtual Machines, Network Interfaces and Network Security Groups (NSG). If you look in the script you can see examples of reading data from Azure and writing it to the Word document. You could of course read any data from your Azure environment and document it in Word. A benefit with a script is that you can schedule it to run on intervals, to always have up-to-date documentation of all your Azure resources.
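As a tiny illustration of the pattern (the ARM cmdlets, the output path and the properties listed are just examples), this writes one line per virtual machine to a new Word document using the Word COM object:

# Start Word and create a new document
$word = New-Object -ComObject Word.Application
$word.Visible = $false
$doc = $word.Documents.Add()
$selection = $word.Selection
$selection.TypeText('Azure Virtual Machines')
$selection.TypeParagraph()

# Write one line per virtual machine: name, size and primary network interface
foreach ($vm in Get-AzureRmVM) {
    $nicName = ($vm.NetworkProfile.NetworkInterfaces[0].Id -split '/')[-1]
    $selection.TypeText(('{0} - {1} - {2}' -f $vm.Name, $vm.HardwareProfile.VmSize, $nicName))
    $selection.TypeParagraph()
}

# Save and close the document
$doc.SaveAs([ref]'C:\Temp\AzureDocumentation.docx')
$doc.Close()
$word.Quit()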

Another thing I was testing was building Visio drawings with PowerShell. To do this I used this PowerShell module. The idea with this example is to read data from Azure and then draw a picture. In my example I included virtual machines and their related storage accounts and networks.

Download my example PowerShell scripts here.


Note that this is provided “AS-IS” with no warranties at all. This is not a production ready solution for your production environment, just an idea and an example.

Monitoring Azure Backup Server with Microsoft Operations Management Suite

In this post I would like to share some ideas around monitoring Azure Backup Server and backup jobs with Microsoft Operations Management Suite (OMS). OMS comes with a solution for Azure Backup. With this solution I can see that the Azure Backup vault protects 3 servers and is using a total of X GB. If I click on "3 registered servers" I can see that these three servers are my Azure Backup Servers. The machines that are being protected by the backup servers are not shown. As a backup administrator you often need to know more than the number of backup servers and used space. In this blog post I will show you how to collect and visualize that information with OMS 🙂

The first thing to do is to install the OMS agent on the Azure Backup Server. Once the agent installation has completed successfully, it is time to configure OMS to collect DPM events. Add the DPM Backup Events, DPM Alerts and CloudBackup event logs under Settings/Data. But before any events are written to these event logs, Azure Backup Server needs to be configured to publish backup events and alerts. This configuration is done in the Microsoft Azure Backup console, in the Management workspace under Options.

Once backup related events start coming in to OMS it is time to configure filters to visualize what we want to see. The following filter will get all successful backup jobs. Event ID 33222 is a successful backup job and event ID 33223 is a failed backup job.

Type=Event EventLevelName=information EventID = 33222 TimeGenerated>NOW-8HOURS | sort Computer

But as you can see in the figure, all values in the Computer column are the Azure Backup Server. I would like to see which data source was protected and also on which server. To do this, you can use custom fields in OMS. With custom fields we can extract data from the event and index it as new fields.

In the next two figures I have extracted protected server and data source from the 33222 and 33223 events, from the ParameterXML parameters. As you can see, we now have one column for the protected server and one column for the data source. We could combine this into one filter, showing both failed and successful jobs, but I think it is better with two filters when we start using these filters in My Dashboard.

We could also run a query like this to get all machines and their latest successful backup.

It can also be interesting to have a filter that shows all protected servers that don’t have a successful backup for the last X hours. In my lab environment I have some events from before I extracted fields from the events, as you can see below.

Once we have our filters and have saved them to favorites we can use them in My Dashboard. We now have a quick overview of our backup jobs on the Azure Backup Server. Of course you can add a number of filters to get more information on your dashboard.

Free E-book: Inside the Microsoft Operations Management Suite

Tao (@MrTaoYang), Stan (@StanZhelyazkov), Pete (@pzerger) and I have been working on a project for the last few weeks. We wanted to bring a learning resource for the MS Operations Management Suite to the community that is complete, comprehensive, concise…and free (as in beer). While we finish final editing passes over the next couple of weeks, we wanted to share an early copy of the book so you can start digging in while we finish our work!

Description: This preview release of “Inside the Microsoft Operations Management Suite” is an end-to-end deep dive into the full range of Microsoft OMS features and functionality, complete with downloadable sample scripts (on Github). The chapter list in this edition is shown below:

  • Chapter 1: Introduction and Onboarding
  • Chapter 2: Searching and Presenting OMS Data
  • Chapter 3: Alert Management
  • Chapter 4: Configuration Assessment and Change Tracking
  • Chapter 5: Working with Performance Data
  • Chapter 6: Process Automation and Desired State Configuration
  • Chapter 7: Backup and Disaster Recovery
  • Chapter 8: Security Configuration and Event Analysis
  • Chapter 9: Analyzing Network Data
  • Chapter 10: Accessing OMS Data Programmatically
  • Chapter 11: Custom MP Authoring
  • Chapter 12: Cross Platform Management and Automation

Download your copy here!

SharePoint Online as frontend for Azure Automation

Back in the Orchestrator days we had the Service Manager self-service portal that we could use to submit items that triggered runbooks in Orchestrator. The integration between Service Manager and Orchestrator worked great and the self-service portal brought a lot of value to automation scenarios. But times change, and now we have a new executor in Azure Automation 🙂

The challenge is that there is no connector between Azure Automation and Service Manager, or any other portal. In this blogpost we will look at how we can use SharePoint Online as a frontend to Azure Automation. The process for a new request will be:

  1. The user submits a new item in a SharePoint list
  2. A SharePoint workflow triggers an Azure Automation runbook
  3. Azure Automation does magic
  4. Azure Automation updates the list item in SharePoint
  5. The user sees the result in the SharePoint list

Setting up SharePoint

  1. Sign-in to SharePoint online
  2. Add a custom list, click on the Add list tile
  3. Download and install SharePoint Designer on your workstation
  4. Once you have installed SharePoint Designer, click on Edit List in the List toolbox, SharePoint Designer will start and load your SharePoint site
  5. In SharePoint Designer click Edit List columns
  6. In the Edit List view, use the Add New Column and Column Settings to configure the list as you need it to be. In my example I have a list with a number of fields that are needed to create a new service account in Active Directory. I have also added a column named Result that Azure Automation uses to write back the result from the runbook. There is also a column named Azure Automation Status that is used to report back the response when submitting the job to Azure Automation. The SP workflow and the column will be created automatically when we connect a workflow to the list.
  7. When the list is as you like it, click SAVE and go back to SharePoint and refresh the page

The list is now created. You can click New Item in the list view and submit new items. You can click Edit this view and add the ID column. The runbook will use the ID field to keep track of which list item to work with.

Setting up Azure Automation

The next step is to set up the Azure Automation runbook and configure the webhook. More general information about webhooks can be found here.

  1. The first thing we need to do is configure Azure Automation with a SharePoint Online module. Tao Yang has a good blog post about this. Tao's blog post is about importing the module in SMA, which you should not try to do 🙂 instead, only follow Tao's steps to build the ZIP file. You can download his ZIP file and then add the two DLL files that he also links to.
  2. Once you have the complete ZIP file, browse to your automation account in Azure Automation and click on Assets and then Modules
  3. On the Modules page, click Add a module, and upload the ZIP file. Remember that if you are planning to use a Hybrid worker the module must be installed on all hybrid workers too
  4. After the module is imported you need to set up a connection to your SharePoint site. Remember that the service account used cannot be configured with two-factor authentication; the account also has to have permissions on the SharePoint site.
  5. I have put together an example runbook for this scenario, which can be found here. It will first show/output all data that comes from the webhook. It will then connect to SharePoint and get the current list and list item. At the end of the runbook an account is created and a hash table is created to update the result back into SharePoint. Either use my example runbook or build a new runbook (a minimal sketch of the webhook handling is shown after this list).
  6. Next step, after the runbook is in place, is to create a webhook: click on the runbook, click Webhook and add a new webhook. Remember to copy the webhook URL before clicking Create
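As a minimal sketch, the webhook-triggered part of such a runbook could start like this (the SharePoint update and the account creation are left as placeholder comments):

param (
    [object]$WebhookData
)

if ($WebhookData -ne $null) {
    # Output everything that came in through the webhook, useful when troubleshooting the workflow
    Write-Output $WebhookData.RequestHeader
    Write-Output $WebhookData.RequestBody

    # The SharePoint workflow posts the dictionary as JSON in the request body
    $item = ConvertFrom-Json -InputObject $WebhookData.RequestBody

    # ... create the service account based on $item ...
    # ... update the SharePoint list item, for example with the SharePointSDK module ...
}
else {
    Write-Error 'This runbook is designed to be started from a webhook.'
}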

Configure the SharePoint workflow

It is now time to configure the SharePoint workflow that will trigger the runbook when a new list item is created.

  1. Open SharePoint Designer and load your SharePoint site
  2. In SharePoint Designer, click Lists and Libraries on the left side, then click on your list
  3. On the right side on SharePoint Designer, click New… next to Workflows
  4. Name the new workflow, Workflow 0003 in my example. Use SharePoint 2013 Workflow as the platform type
  5. When the workflow is created, configure it to start automatically when a new item in the list is created
  6. Click Edit Workflow to start building the new workflow
  7. The workflow, when complete, should look like this
  8. The first step is to build a Dictionary, mapping list fields to variables
  9. Next step is to call the runbook webhook; paste in the webhook URL. Remember to change the request type to HTTP POST

  10. The last step in the workflow posts the response from Azure Automation back to a column in the list. When the workflow does the HTTP POST to trigger the runbook, a message is sent back; that is the message you will write to the column. This gives you a simple log of whether the job was submitted to Azure Automation successfully
  11. When all steps in the workflow are configured, click SAVE and then PUBLISH in SharePoint Designer.

Testing 1-2-3

We have now built a list in SharePoint, we have built a workflow in SharePoint that will invoke a runbook. The runbook performs some action, in this example it creates an AD account, and sends the results back to the list in SharePoint.

We fill in the list item. I guess that with a bit more SharePoint skills it would be possible to hide the last two fields when filling in the information; those two fields are only used to store status.

After a couple of seconds we can see that the workflow has run (Stage 1) and that there is a response from Azure Automation when triggering the webhook (Accepted).

In Azure Automation we can see that the job has completed. We can see a lot of info as output from the runbook. In a production runbook you might want to scale down all the extra code and output 🙂

If we go back to SharePoint and do a refresh we can see there is a result saying the account already exists in AD and no new account has been created. If we submit a new item with a request for another account it works; the new account is created 🙂

If you would like to add approval steps to your solution, read more here.

OMS black belt Jakob also has great ideas about using SharePoint Online that I recommend; read it here

Moving a VHD between storage accounts

Moving a virtual machine (VM) between storage accounts sounds like an easy task, but it can still be a bit complicated 🙂 In this blog post I will show how this can be done with AzCopy. AzCopy is a command-line utility designed for high-performance uploading, downloading, and copying of data to and from Microsoft Azure Blob, File, and Table storage. Read more about AzCopy and download it here. When using AzCopy you need to know the key for your storage account. This key can be found in the Azure portal, both the preview portal and the classic portal.

When you are copying a VHD file for a VM you need to know the storage account URL and the name of the VHD file. This can be found:

  • Classic Portal. Click Virtual Machines, click Disks and look at the Location column. The first part of the Location URL is the name of the storage account.
  • Preview Portal. Click at a virtual machine, click All settings, click Disks, click on the disk and look at VHD location. The first part of the Location URL is the name of the storage account.
  • If your target is a storage account in v2, the preview portal, and you need to know the URL to the storage account: click the storage account, click All Settings, Access Keys and then look at the Connection String. In the connection string you will find BlobEndPoint; this is the URL to use as the destination.

In this scenario I will move a VHD located at https://psws5770083082067456747.blob.core.windows.net/vhds/geekcloud-geekcloud-dc01-2013-11-22.vhd to a new storage account in v2, https://contosowelr001.blob.core.windows.net/

  1. Start a Command Prompt
  2. Go to the C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy folder
  3. Run the following command to start the copy job. In the image I have covered the two keys, but I think you still get the command 🙂 AzCopy.exe /source:<The https:// URL to your storage account and container> /Dest:<The https:// URL to your destination storage account and container> /sourcekey:<The primary key for the source storage account> /destkey:<The primary key for the target storage account> /pattern:<file name> (a full example is sketched after this list)
  4. Wait…
  5. When the copy job is completed, verify that it was successful
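With the source and destination from this scenario, the command could look something like this (keys replaced with placeholders, and the destination container is assumed to be vhds):

AzCopy.exe /Source:https://psws5770083082067456747.blob.core.windows.net/vhds /Dest:https://contosowelr001.blob.core.windows.net/vhds /SourceKey:<source storage account key> /DestKey:<destination storage account key> /Pattern:geekcloud-geekcloud-dc01-2013-11-22.vhd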

If you want to list the files you have in an Azure storage account, you can use PowerShell. In this example we first set up a storage account context and then list the files in each container in the storage account.

$blob_account = New-AzureStorageContext -StorageAccountName contosowelr001 -StorageAccountKey xl6DMUmnbKvr -Protocol https
# List all blobs in each container in the storage account
Get-AzureStorageContainer -Context $blob_account | ForEach-Object {
    $container = $_.Name
    Get-AzureStorageBlob -Container $container -Context $blob_account | Select Name
}

You can also see the files in the Azure portal, under the storage account/blobs/containers/

"Run As" with Azure Automation Hybrid Worker

Runbooks in Azure Automation cannot access resources in your local data center since they run in the Azure cloud. The Hybrid Runbook Worker feature of Azure Automation allows you to run runbooks on machines located in your data center in order to manage local resources. The runbooks are stored and managed in Azure Automation and then delivered to one or more on-premise machines where they are run. Source.

By default the runbook will run in the context of the local system account on the Hybrid Runbook Worker. This might be a challenge, as the computer account is seldom assigned any permissions, even if that is possible. The scenario I was working on included creation of a service account in an on-premises domain. My first idea was to change the service account for the Microsoft Monitoring Agent service, but that did not work out very well 🙁

The next idea was to do a remote session from my Azure Automation runbook to a server, using a domain account as credentials. The runbook shown in this blogpost is an example of how to do a remote session within an Azure Automation runbook. The runbook uses two input parameters, first name and last name. The runbook picks up the account (SKYNET Super User) that will be used to remotely connect to a domain controller. The account is stored encrypted in Azure Automation as an asset. The inline script session returns the name of the new user account, which is also returned as output (Write-Output) from the runbook.
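Roughly, the pattern in the runbook looks like this (the domain controller name and the naming logic for the new account are assumptions; the credential asset name is the one mentioned above):

param (
    [Parameter(Mandatory = $true)][string]$FirstName,
    [Parameter(Mandatory = $true)][string]$LastName
)

# Get the credential asset stored (encrypted) in Azure Automation
$cred = Get-AutomationPSCredential -Name 'SKYNET Super User'

# Run the account creation remotely on a domain controller, in the context of the domain account
$newUser = Invoke-Command -ComputerName 'SKYNET-DC01' -Credential $cred -ScriptBlock {
    param ($FirstName, $LastName)
    Import-Module ActiveDirectory
    $sam = ('svc-{0}{1}' -f $FirstName.Substring(0, 1), $LastName).ToLower()
    New-ADUser -Name "$FirstName $LastName" -SamAccountName $sam -Enabled $false
    (Get-ADUser -Identity $sam).Name
} -ArgumentList $FirstName, $LastName

# Return the name of the new user account as runbook output
Write-Output $newUser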

I start the runbook from the Azure Portal and input the two parameters. A short while later I can see that the job has completed, and the output from the runbook. I can also see the new account in Active Directory.


Note that this is provided “AS-IS” with no warranties at all. This is not a production ready management pack or solution for your production environment, just an idea and an example.

Network Security Groups – Getting an overview

I was working with Network Security Groups (NSG) earlier this week. The environment included multiple VNETs, subnets, NSGs and associations on both VMs and subnets. It quickly became complicated to keep track of what had been configured, the associations and the NSG rules. Therefore I created a PowerShell script that generates an HTML based report that gives me an overview. I thought I should share this with the community, even if it is a "quick-hack-with-bad-written-code" 🙂

To run the script, start Azure PowerShell and set up a connection to your Azure subscription. Also make sure you have a C:\TEMP folder. The script will export your Azure network configuration and read it. The script will also query your Azure subscription for information, for example virtual machines with an NSG associated. The HTML file will be named C:\temp\net.htm.
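As a small illustration of the kind of information the script collects, something like this (assuming the AzureRM networking cmdlets) lists each NSG together with the subnets and network interfaces it is associated with:

foreach ($nsg in Get-AzureRmNetworkSecurityGroup) {
    [pscustomobject]@{
        Name              = $nsg.Name
        ResourceGroup     = $nsg.ResourceGroupName
        Subnets           = ($nsg.Subnets | ForEach-Object { ($_.Id -split '/')[-1] }) -join ', '
        NetworkInterfaces = ($nsg.NetworkInterfaces | ForEach-Object { ($_.Id -split '/')[-1] }) -join ', '
        Rules             = $nsg.SecurityRules.Count
    }
}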

The first part of the HTML page is an overview of VNETs, subnets, address prefixes and associated NSGs. The second part of the HTML page is an overview of the NSG rules in use. In this example I have three NSGs in use, two associated with subnets and one associated with two virtual machines. There is also one NSG that is not in use at all.

Download the script List NSGv2.



Note that this is provided “AS-IS” with no warranties at all. This is not a production ready management pack or solution for your production environment, just an idea and an example.