Azure Automation – Up / Down Scale Azure VM


Recently I made a couple of demos showing how you can benefit from Azure Automation acting on certain alerts. One demo was: if an OMS or Azure Monitor alert is triggered because of high CPU on your Azure VM, a webhook is called that scales up the Azure VM to a predefined size.


In Azure Monitor you are able to create an alert and set the threshold to e.g. 80%…


…and then call a webhook…


Note: Azure Monitor already has built-in alert actions to start / stop a virtual machine or even scale it up / down. These actions install and call pre-configured Azure Automation runbooks from Microsoft. But this post shows you in a transparent way how you can achieve the same by doing it yourself.

As you probably know, in OMS you are also able to provide a webhook in your alert settings…


Whichever way you choose, Azure Monitor or OMS, it does not matter. I just want to share the PowerShell script I used in Azure Automation. It is far from complete and not production level, but as a demo it serves its purpose.

# Get the automation credential used for authenticating and resizing the VM
$Credential = Get-AutomationPSCredential -Name 'AzureScaleUser'
# Target size of the VM, e.g. Standard_D1_v2, Standard_D2_v2
$HWProfile = Get-AutomationVariable -Name 'ScaleUpSize'
# Resource group the VM lives in
$ResourceGroup = Get-AutomationVariable -Name 'ResourceGroup'
# Subscription which hosts the VM / account etc.
$SubscriptionId = Get-AutomationVariable -Name 'SubscriptionID'
# Name of the VM you want to scale up / down
$VMName = Get-AutomationVariable -Name 'VMName'

# Log in to Azure
Add-AzureRmAccount -Credential $Credential -SubscriptionId $SubscriptionId
# Get the VM
$VM = Get-AzureRmVM -Name $VMName -ResourceGroupName $ResourceGroup

If ($VM.HardwareProfile.VmSize -eq $HWProfile)
{
    Write-Output "HW size already set to $($VM.HardwareProfile.VmSize)"
}
Else
{
    Write-Warning "Scaling up to $HWProfile"
    # Set the new VM size
    $VM.HardwareProfile.VmSize = $HWProfile
    # Update the VM
    Update-AzureRmVM -VM $VM -ResourceGroupName $ResourceGroup
    Write-Output "HW scaled up to $($VM.HardwareProfile.VmSize)"
}
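As a side note, when the runbook is started from a webhook, Azure Automation hands the POST body to the runbook in a `$WebhookData` object, so you could read the target VM from the payload instead of an Automation variable. A minimal sketch follows; the `context.resourceName` property path is an assumption based on the classic Azure Monitor alert payload, so inspect what your alert actually posts and adjust.

```powershell
param (
    # Azure Automation fills this object with the webhook POST data when the
    # runbook is started from a webhook
    [object]$WebhookData
)

# For local testing, fake a payload; in Azure Automation this comes from the alert
If (-not $WebhookData)
{
    $WebhookData = [pscustomobject]@{
        RequestBody = '{"context":{"resourceName":"demo-vm","resourceGroupName":"demo-rg"}}'
    }
}

# The property path below is an assumption based on the classic Azure Monitor
# alert payload shape; adjust it to your actual alert schema
$Body          = ConvertFrom-Json -InputObject $WebhookData.RequestBody
$VMName        = $Body.context.resourceName
$ResourceGroup = $Body.context.resourceGroupName
Write-Output "Alert fired for VM '$VMName' in resource group '$ResourceGroup'"
```

With that in place, the scaling logic above can operate on `$VMName` and `$ResourceGroup` from the alert instead of fixed Automation variables.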

My Azure Automation variables look like this…


I hope this helps you with your next demo / PoC or whatever project you need it for.

ARM Template – Deployment Error “[Subscription().SubscriptionId]” The request is invalid…


Recently I authored some ARM stuff in Visual Studio and I needed to get the ID of the subscription the template is being deployed to. There is a helper function you can call like this…

"subscriptionId": "[subscription().subscriptionId]"

I used it in the template like this…

(screenshot: the variable definition in the template)

…but as soon as I tried to deploy the template, I hit this error every time…

"code": "BadRequest", "message": "{\"Message\":\"The request is invalid.\",\"ModelState\":{\"variable.properties.value\":[\"Invalid JSON primitive: 328de222-1a51-458a-96be-6770259e84c0.\"]}}"


I am not sure why this happens, but I figured out a workaround: if I build the subscription ID with a concatenation, it works like this…

(screenshot: the working variable definition using concat())
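For reference, the working pattern looks roughly like this. This is reconstructed from the description above, and the variable name is just an example:

```json
"variables": {
    "subscriptionId": "[concat(subscription().subscriptionId)]"
}
```

Wrapping the expression in concat() makes the template engine emit a plain string, which avoids the "Invalid JSON primitive" parsing error.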

I hope this saves you some headache!

OMS – Azure Scheduler Solution


UPDATE 07.08.2017 21:51: I found a bug in the ARM template which made the dashboard not appear. I have just fixed it; in case you already deployed the solution, just redeploy it. Sorry for the hassle.

Currently I am doing some more OMS work, and therefore I also took a deeper dive into building ARM templates to deploy an OMS solution. I was looking for a simple Azure service to gather data from that I could ingest into OMS. My goal was to have a use case where I only need to provide the minimal necessary parameters and the rest is done by the ARM template.

How does it work?

Well, basically there is an Azure Automation runbook that runs a PowerShell script on an hourly schedule to collect data from the Azure Scheduler service. If there are any collections and jobs in Azure Scheduler, it ingests the data into OMS via the API.
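In outline, such a collection runbook could look like the sketch below. This is not the actual script from the solution; it assumes the AzureRM.Scheduler and OMSIngestionAPI modules are installed, the Automation variable names are hypothetical, and the job property names are illustrative.

```powershell
# Sketch of an hourly collection runbook (simplified, not the shipped script)
$WorkspaceId  = Get-AutomationVariable -Name 'OMSWorkspaceId'   # hypothetical variable names
$WorkspaceKey = Get-AutomationVariable -Name 'OMSWorkspaceKey'

# Enumerate all Scheduler job collections and their jobs
$Records = foreach ($Collection in Get-AzureRmSchedulerJobCollection)
{
    foreach ($Job in Get-AzureRmSchedulerJob -ResourceGroupName $Collection.ResourceGroupName `
                                             -JobCollectionName $Collection.JobCollectionName)
    {
        # Property names are illustrative; check the objects returned in your environment
        [pscustomobject]@{
            Collection = $Collection.JobCollectionName
            JobName    = $Job.JobName
            Status     = $Job.Status
        }
    }
}

# Ingest the records into the OMS workspace as a custom log type
If ($Records)
{
    $Json = ConvertTo-Json -InputObject @($Records)
    Send-OMSAPIIngestionFile -customerId $WorkspaceId -sharedKey $WorkspaceKey `
                             -body $Json -logType 'AzureScheduler'
}
```

The custom log type then shows up in Log Analytics with a _CL suffix, which is what the dashboard views query.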

The OMS solution contains the following views:

  • Jobs with errors
  • Status of the jobs
  • Jobs and how many times they have been executed
  • How many jobs a collection contains
  • Some useful queries

(screenshot of the OMS Azure Scheduler dashboard)

How do I deploy it?
Go to my GitHub repository: https://github.com/stefanrothnet/AzureScheduler


You need to provide the credentials for accessing the Azure Scheduler service; these will be saved in the Azure Automation account. Make sure the credentials have permission to access the subscription you are targeting. In addition, you need to provide a schedule link GUID: because there is no ARM template function to generate a GUID, we need to provide one manually. This GUID is needed to link the Azure Automation schedule to the Azure Automation runbook. Use the PowerShell cmdlet New-Guid to generate a GUID and paste it into the settings.
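Generating the GUID is a one-liner, for example:

```powershell
# Generate a new GUID for the schedule link and print it;
# paste the output into the deployment settings
$ScheduleLinkGuid = (New-Guid).Guid
$ScheduleLinkGuid
```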


The template performs the following steps:

  • Creates a resource group
  • Creates an Azure Automation account
  • Deploys the PowerShell runbook / script to collect Azure Scheduler data
  • Creates an Azure Automation schedule to run the runbook that collects the data. It runs every hour, starting at deployment time.
  • Creates Azure Automation variables for the OMS workspace ID and key
  • Creates an Azure Automation variable for the current subscription ID
  • Creates an Azure Automation credential with username and password
  • Updates the AzureRM.Profile and AzureRM.Scheduler modules
  • Installs the OMSIngestionAPI module
  • Deploys an OMS workspace and installs the solution into the workspace

If you have tried to create such a solution before, or any other ARM project, you know there are many problems you will face.

So what is the current state of this solution?

  • All the necessary components are being deployed and are working (I tested it only in the West Europe Azure location!)
  • There are some parts with the OMS dashboard I need to update and adjust, but for the moment it works and offers a good demo case for an OMS solution.
  • Be aware, it is not a production-ready product; it is made for learning and testing. I tested it only briefly and I am not an Azure Scheduler MVP 😉.

If you encounter any problems or things that don’t appear the way they should, let me know. Have fun!

OMS – Disconnect Azure Storage Account from Workspace


In OMS you are able to collect data from a storage account. Why is this useful? Well, there are times when you want to keep data from different Azure sources for longer than Azure itself retains it and then dig into the data using OMS. For example, you are able to store IIS logs, Windows events, Syslog (Linux), Windows tracing logs (ETW logs) or Service Fabric events. In the past you could simply configure these settings within the OMS portal itself.


In the current OMS portal you simply see something like this…


…the documentation link does not provide much help in terms of connecting or removing these accounts. Therefore, go to the new Azure portal, select your workspace, select “Storage account logs” and click Add…


SCOM / OMS – MP University 2017 Recording


Yes, Silect did it again! A few days ago Silect Software hosted MP University 2017, an online event packed with sessions from well-known names like Kevin Holman, Brian Wren and Aditya Goda from Microsoft, Marnix Wolf from Didacticum and Mike Sargent from Silect. What I like about this event is that it is not marketing; instead, the sessions are packed with very deep MP authoring content and, as it seems, are starting to touch OMS as well. If you missed the event, I encourage you to watch the recordings online on YouTube.

MP Authoring Basics and Silect MP Author

 

MP Authoring using Fragments


Azure Zurich User Group – Speaker

I am very happy to have a session at the Azure Zurich user group meetup in Zürich, Switzerland. My session about Microsoft Operations Management Suite (OMS) will give you an overview of how OMS works and what it is capable of in private, hybrid and public cloud scenarios. The session is split into two parts to leave enough room for discussions and some drinks. Please join us and share this user group event. You can find all the information on their Meetup site here.


Hope to see you there!

OMS – OMS, is it SCOM in the cloud?


I can recall many instances whilst attending conferences and talking with customers or colleagues whereby misunderstandings have caused a significant amount of confusion.

“Operations Management Suite is SCOM in the cloud”

This is one that has been doing the rounds lately, but is it correct? To answer the question we need to do a bit of digging into the past. André Malraux once said,

“Who wants to read in the future, must scroll in the past.”

System Center Operations Manager (SCOM) was and is the Microsoft monitoring solution for homogeneous and heterogeneous IT environments. SCOM was originally developed by NetIQ, then purchased by Microsoft in 2000. It carries with it a 17-year evolution, which started when the product was called Microsoft Operations Manager (MOM). In 2007 MOM was completely rewritten on a flexible and extensible framework, and SCOM was born. Development has continued ever since, and the latest available version is SCOM 2016.
About six years ago, Microsoft began to experiment with System Center Advisor, an agent-based assessment and best practice analyzer solution based in the cloud. It provided the ability to analyze different workloads such as the Windows operating system, SQL Server, Active Directory and Hyper-V components, detect changes to the IT infrastructure, and propose Microsoft best practices in the form of alerts. Between 2012 and 2013 the range of supported technologies was extended to include Exchange, SharePoint and Lync. Initially a separate solution, it quickly became integrated into SCOM 2012 SP1 by means of a connector. The newly generated information retrieved from Azure became available both on-premises within SCOM and in the cloud through the System Center Advisor extension. By SCOM 2012 R2, the connector came pre-bundled as part of the suite.

In 2014 System Center Advisor was transformed: gone was the Silverlight-based web application, and in came a new HTML5-based web app with a host of new capabilities. This meant that the best practice analyzer System Center Advisor could be integrated into a new product called Azure Operational Insights, whose range of capabilities could be greatly expanded by the use of so-called Intelligence Packs (IPs). The following packs were released as part of the initial deployment:

  • Configuration Assessment
  • Malware Assessment
  • Capacity Planning
  • Change Tracking
  • Log Management
  • SQL Assessment
  • System Update Assessment

A new key feature acted like a cloud-based “data pot”: data was collected using an agent and could be analyzed with a PowerShell-like syntax within the Azure Operational Insights Search Data Explorer. A connection to SCOM was also ensured by a SCOM connector. The Operational Insights product is the foundation for today’s Operations Management Suite (OMS): the Operational Insights Search Data Explorer is now called Azure Log Analytics, and Intelligence Packs are called solutions (solution packs).

Since we now know the background of both products, I would like to juxtapose the facts in order to answer the question objectively.

Concept
SCOM consists of an extensible hierarchical object model. This means that components to be monitored in SCOM can be discovered (Discovery) by means of management packs (XML files) and placed into a hierarchy (the service model) using relationships. Sensors (monitors) can move a subordinate object into a healthy state or into a faulty (unhealthy) state and represent this visually. The health state can be passed to the parent object (rollup). This model is described as a health model and has many advantages as well as certain disadvantages.

OMS works with so-called flat data; this means the data exists as records in one large data pot. There are no objects or relationships among the collected data. For example, solution 1 collects disk information from computer X. At the same time, solution 2 collects information on the same disk, BUT there is no relationship, nor any knowledge of status, between the disk data from solution 1 and solution 2. OMS does not (yet) have a service model and therefore also no health model.
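To make the flat-data idea concrete: in the legacy OMS search syntax, a query simply filters and aggregates such records, for example (a generic query, not tied to a particular solution):

```
Type=Perf ObjectName=LogicalDisk CounterName="% Free Space" | Measure avg(CounterValue) by Computer
```

Every record stands on its own; the grouping by Computer happens at query time, not through any modeled relationship between the records.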
