Search Results for: rule performance data
UNIX/Linux Shell Command as Performance Data in OM12
Tonight I created an example of how to use a Linux/UNIX command to return data as performance data in OM12. In this example we build a rule that counts files in a folder and returns the result as performance data.
- In the Operations Manager console, navigate to Author/Management Pack Objects/Rules
- Right-click rules and create a new Rule, select Collection Rules/Probe Based/UNIX/Linux Shell Command (Performance) rule
- Select a management pack
- General, input a name and a rule target. For example “Field – X plat – Number of files” as name and “SUSE Linux Enterprise Computer” as rule target
- Schedule, select how often you want the rule to run the command, for example every 15 minutes
- Shell Command Details, input the script you want to use. The command can be a path to a binary or script file or a single-line shell command. In this example we use the following command to count files in the /tmp folder
find /tmp | wc -l
- Filter Expression, use default settings and click Next
- Performance Mapper, input name of object, counter and instance, for example
Object: Folder Management
Counter: Number of files
Instance: /tmp
- Click Create and your rule is ready!
You can now navigate to a performance view and show the data collected by the rule (you might need to wait a few minutes depending on your rule settings).
Look for new databases (…with a pinch of DPM)
With SQL Server Audit, SQL Server 2008 introduces an important new feature that provides a true auditing solution for enterprise customers. While SQL Trace can be used to satisfy many auditing needs, SQL Server Audit offers a number of attractive advantages that may help DBAs more easily achieve their goals such as meeting regulatory compliance requirements. These include the ability to provide centralized storage of audit logs and integration with System Center, as well as noticeably better performance. Perhaps most significantly, SQL Server Audit permits fine-grained auditing whereby an audit can be targeted to specific actions by a principal against a particular object. This paper provides a comprehensive description of the new feature along with usage guidance and then provides some practical examples. Source MSDN
If you want to get an alert when a new database is created in SQL 2008 you first need to configure auditing on the SQL side, and then create a rule in Operations Manager to generate an alert. Configure a new audit with the audit destination set to the Application log or the Security log. If you select the Security log you might need to configure some extra security permissions. Then create a new server audit specification and set the audit action type to DATABASE_CHANGE_GROUP.
The next step is to create the rule that will pick up the SQL event and generate an alert. Create a new event-based rule and target it, for example, to SQL Servers to monitor all your SQL machines. Configure the rule to look for event ID 33205 including CREATE and DATABASE in the event description.
In the expression of the rule we use “.*” to tell Operations Manager to match any characters before, between, and after the two keywords CREATE and DATABASE.
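As a quick sanity check of that expression you can test the pattern against a sample event description, for example in PowerShell (the description text below is just an illustration, not a real 33205 event):
# Hypothetical fragment of an event 33205 description
$description = 'action_id:CR class_type:DB statement:CREATE DATABASE [Sales]'
$description -match 'CREATE.*DATABASE'   # returns True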
Another step you might want to take is to verify that you back up the new SQL database. I modified a PowerShell script; it connects to your DPM server and a SQL box, and then asks you if you want to add any of the unprotected databases on that server to a protection group in DPM.
# Run this from the DPM Management Shell (the DPM PowerShell snap-in must be loaded)
param([string] $ProductionServer, [string] $PGName)
if(!$ProductionServer)
{
    $ProductionServer = read-host "Enter the production server name (a SQL server protected by DPM)"
}
if(!$PGName)
{
    $PGName = read-host "Enter the name of your existing SQL protection group"
}
$dpmservername = read-host "Enter the name of your DPM server"
connect-dpmserver $dpmservername
$dpmservername

# Find the protection group and get a modifiable copy of it
$PGList = @(Get-ProtectionGroup $dpmservername)
foreach ($PG in $PGList)
{
    if($PG.FriendlyName -eq $PGName)
    {
        write-host "Found protection group $PGName"
        $MPG = Get-ModifiableProtectionGroup $PG
        $PGFound = $true
    }
}
if(!$PGFound)
{
    write-host "Protection Group $PGName does not exist"
    exit 1
}

# Find the production server and run an inquiry to list its data sources
$PSList = @(Get-ProductionServer $dpmservername)
$DsList = @()
foreach ($PS in $PSList)
{
    if($PS.NetBiosName -eq $ProductionServer)
    {
        write-host "Running Inquiry on" $PS.NetBiosName
        $DsList += Get-Datasource -ProductionServer $PS -Inquire
        $PSFound = $true
    }
}
if(!$PSFound)
{
    write-host "Production Server $ProductionServer does not exist"
    exit 1
}

# Ask, for each unprotected SQL database, whether it should be added to the protection group
$protectedDsList = @()
foreach ($ds in $DsList)
{
    if($ds.ToString("T", $null) -match "SQL" -and !$ds.Protected)
    {
        $toadd = read-host "Do you want to protect the $($ds.Name) database? (y/n)"
        if ($toadd -eq "y")
        {
            $protectedDsList += $ds
            Add-ChildDatasource -ProtectionGroup $MPG -ChildDatasource $ds
            $x = Get-DatasourceDiskAllocation -Datasource $ds
            Set-DatasourceDiskAllocation -Datasource $x -ProtectionGroup $MPG
        }
    }
}
Set-ReplicaCreationMethod -ProtectionGroup $MPG -Now
if($protectedDsList.Length)
{
    write-host "Adding new SQL DBs to" $MPG.FriendlyName
    Set-ProtectionGroup $MPG
}
disconnect-dpmserver $dpmservername
"Exiting from script"
(Tested in a sandbox, so I am aware that the Ops Mgr databases and all the test databases are not protected.) If you want to integrate the script into Ops Mgr you should read this post from David Allen.
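If you save the script as, for example, Add-UnprotectedSqlDbs.ps1 (a hypothetical file name), a run from the DPM Management Shell could look like this; the script then prompts for the DPM server name and asks about each unprotected database:
# Hypothetical file name, server name and protection group name
.\Add-UnprotectedSqlDbs.ps1 -ProductionServer SQL01 -PGName "SQL Protection Group"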
Performance Reports and Groups
When running a performance report against a group you get an average value for all the members of the group. Often you need the report to show each member of the group separately. You can of course add each member of the group as an object to the report. Another solution is to build a report where you can input a group; that will save you some time if you already have up-to-date groups you want to run reports against. The following query can be run against a group that contains computer objects; it will find the members of the group and report on each of them.
SELECT vManagedEntity.ManagedEntityGuid, vManagedEntityTypeImage.Image, Perf.vPerfHourly.DateTime, Perf.vPerfHourly.SampleCount, Perf.vPerfHourly.AverageValue,
Perf.vPerfHourly.StandardDeviation, Perf.vPerfHourly.MaxValue, vManagedEntity.FullName, vManagedEntity.Path, vManagedEntity.Name,
vManagedEntity.DisplayName, vManagedEntity.ManagedEntityDefaultName, vPerformanceRuleInstance.InstanceName, vPerformanceRule.ObjectName,
vPerformanceRule.CounterName
FROM Perf.vPerfHourly INNER JOIN
vManagedEntity ON Perf.vPerfHourly.ManagedEntityRowId = vManagedEntity.ManagedEntityRowId INNER JOIN
vManagedEntityType ON vManagedEntity.ManagedEntityTypeRowId = vManagedEntityType.ManagedEntityTypeRowId LEFT OUTER JOIN
vManagedEntityTypeImage ON vManagedEntityType.ManagedEntityTypeRowId = vManagedEntityTypeImage.ManagedEntityTypeRowId INNER JOIN
vPerformanceRuleInstance ON vPerformanceRuleInstance.PerformanceRuleInstanceRowId = Perf.vPerfHourly.PerformanceRuleInstanceRowId INNER JOIN
vPerformanceRule ON vPerformanceRuleInstance.RuleRowId = vPerformanceRule.RuleRowId
WHERE (vPerformanceRule.CounterName LIKE N'%Available MBytes%') and (Perf.vPerfHourly.DateTime > @ReportParameter2 and Perf.vPerfHourly.DateTime < @ReportParameter3)
and vManagedEntity.ManagedEntityGuid in (
select BMETarget.BaseManagedEntityId from OperationsManager.dbo.BaseManagedEntity BMESource
inner join OperationsManager.dbo.Relationship R
on R.SourceEntityId = BMESource.BaseManagedEntityId
inner join OperationsManager.dbo.BaseManagedEntity BMETarget
on R.TargetEntityId = BMETarget.BaseManagedEntityId
inner join OperationsManager.dbo.ManagedType MT
on BMETarget.BaseManagedTypeId = MT.ManagedTypeId
where MT.TypeName = 'Microsoft.Windows.OperatingSystem'
and BMESource.BaseManagedEntityId in (
select BMETarget.BaseManagedEntityId from OperationsManager.dbo.BaseManagedEntity BMESource
inner join OperationsManager.dbo.Relationship R
on R.SourceEntityId = BMESource.BaseManagedEntityId
inner join OperationsManager.dbo.BaseManagedEntity BMETarget
on R.TargetEntityId = BMETarget.BaseManagedEntityId
Where BMESource.DisplayName = @Group)
)
ORDER BY Perf.vPerfHourly.DateTime
In this example the report will show the “Available MBytes” performance counter for a group that you input as the parameter @Group. It will show data between @ReportParameter2 and @ReportParameter3 (dates). I get all groups from the database with this query:
Select DISTINCT BMESource.DisplayName as [Group Name]
From OperationsManager.dbo.BaseManagedEntity BMESource
Inner Join OperationsManager.dbo.Relationship R
On R.SourceEntityId = BMESource.BaseManagedEntityId
Inner Join OperationsManager.dbo.BaseManagedEntity BMETarget
On R.TargetEntityId = BMETarget.BaseManagedEntityId
The two date parameters, @ReportParameter2 and @ReportParameter3, I get from two queries that return the current date and the current date minus seven days.
SELECT convert(date,getdate(),21)
SELECT convert(date,dateadd(day,-7,getdate()),21)
In my report I also added a matrix to show the values. I added the following expression as BackgroundColor on the data value cell. This gives me a red background on every value below 100, in this example each time a machine had less than 100 MB of free memory.
=iif(Fields!AverageValue.Value < 100, "Red", "White")
Big thanks to Mike Eisenstein for good ideas and SQL help.
ACS Collector Performance Counters
After you have implemented Audit Collection Services in your environment you need to start monitoring the collector. There are a number of performance counters that can help you with this:
- Connected Clients
- Database Queue % Full
- Database Queue Backoff Threshold in %
- Database Queue Disconnect Threshold in %
- Database Queue Length
- DB Loader Event Inserts/sec
- DB Loader Principal Inserts/sec
- DB Loader String Inserts/sec
- DB Principal Cache Hit %
- DB Request Queue Length
- DB String Cache Hit %
- Event time in collector in milliseconds
- Incoming Events/sec
- Interface Audit Insertions/sec
- Interface Queue Length
- Registered Queries
If you want to collect this performance data and review it for analysis and planning, you can first create a collection rule and then a performance view.
- Start the console, click Authoring
- Right-click Rules and choose to create a new rule
- Rule Type: Choose Collection Rules/Performance Based/Windows Performance. Click Next
- General: Input a rule name and description, choose “Microsoft Audit Collection Services Collector” as rule target, then click Next
- Performance Counter: input “ACS Collector” as object. If you want to collect for example number of events inserted in the database per second, select “DB Loader Event Inserts/sec” as counter. Select “Include all instances for the selected counter”. Configure the interval and then click Next
- Optimized Collection: Choose to use optimization or not, click Create
If you want to view the collected data you can create a performance view, and configure it to show data related to “Microsoft Audit Collection Services Collector”.
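If you just want to spot-check a couple of these counters directly on the collector server before building rules, you can read them locally with PowerShell (run on the ACS collector itself):
# Sample two of the ACS Collector counters locally on the collector server
Get-Counter -Counter '\ACS Collector\DB Loader Event Inserts/sec' -SampleInterval 15 -MaxSamples 4
Get-Counter -Counter '\ACS Collector\Database Queue % Full' -SampleInterval 15 -MaxSamples 4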
Script That Returns Performance Data
I have made a script that will search for a word in a logfile and send the result back as performance data to Ops Mgr 2007.
With default settings the script will look in the IIS log files (C:\windows\system32\LogFiles\W3SVC1\) for the last hour and search for the word “POST”. It then returns how many times that word occurs in the file as performance data.
To use this script you can create a new Collection Rules/Probe Based/Script (Performance) rule, paste the script into the “script box” and input suitable performance mapping information. If you change the “Value” (default $Data/Property[@Name=’PerfValue’]$) you must also change this in the script. Remember to configure your IIS to generate a log file for every hour, or modify the script.
I have configured my script to run every 70 minutes, synchronized at 00:00. It will then run 10 minutes past every hour and sum up the hour before.
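If you only want the general idea before downloading, here is a rough PowerShell sketch of the same approach: count occurrences of a word in the previous hour's IIS log and return the count as a property bag with a PerfValue property. The hourly file-name pattern is an assumption; adjust it to your IIS logging settings.
# Sketch only: count occurrences of a word in the previous hour's IIS log
# and return it to Operations Manager as the PerfValue property
$word   = 'POST'
$logDir = 'C:\windows\system32\LogFiles\W3SVC1'
$lastHr = (Get-Date).AddHours(-1)
# Assumed hourly log file naming (exYYMMDDHH.log); adjust to your IIS configuration
$logFile = Join-Path $logDir ('ex{0:yyMMddHH}.log' -f $lastHr)
$count = 0
if (Test-Path $logFile) {
    $hits  = Select-String -Path $logFile -Pattern $word -AllMatches
    $count = ($hits | ForEach-Object { $_.Matches.Count } | Measure-Object -Sum).Sum
}
# Return the value as performance data (matches the default PerfValue mapping above)
$api = New-Object -ComObject 'MOM.ScriptAPI'
$bag = $api.CreatePropertyBag()
$bag.AddValue('PerfValue', [int]$count)
$api.Return($bag)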
You can download the script here. Send me an e-mail if you have any feedback or suggestions.
Check two or more performance counters
I have made a script that will check two performance counters and generate an alert if both are over their threshold values. It is simple to add more performance counters and change the threshold values. I have tested it on Windows Server 2003 and Windows XP 32-bit. It is fairly basic, but it will show you a way to generate an alert based on two or more performance counters.
' Checks the Processes and Processor Queue Length counters and writes a
' warning event to the Application log if both are over their thresholds
Const EVENT_WARNING = 2

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set objRefresher = CreateObject("WbemScripting.SWbemRefresher")
Set colItems = objRefresher.AddEnum _
    (objWMIService, "Win32_PerfFormattedData_PerfOS_System").objectSet
objRefresher.Refresh

For i = 1 To 2
    For Each objItem In colItems
        strProc = objItem.Processes
        strCPUQL = objItem.ProcessorQueueLength
        If strProc > 40 And strCPUQL > 0 Then
            Set objShell = CreateObject("Wscript.Shell")
            objShell.LogEvent EVENT_WARNING, "Number of processes: " & strProc & _
                ". Processor Queue Length: " & strCPUQL
        End If
        objRefresher.Refresh
    Next
Next
1. Create a new script and paste the script source
2. Create a new event rule to run the script every X minutes as a response
3. Create a new event rule to collect event ID 2 and event type of Warning
4. Commit the configuration change
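If you prefer PowerShell over VBScript, a rough equivalent of the same idea could look like this on a current Windows PowerShell (the thresholds are the same example values as above; the event source 'WSH' is only an example and must already be registered on the machine):
# Sketch: check two counters and write a warning event if both exceed their thresholds
$os = Get-CimInstance -ClassName Win32_PerfFormattedData_PerfOS_System
if ($os.Processes -gt 40 -and $os.ProcessorQueueLength -gt 0) {
    Write-EventLog -LogName Application -Source 'WSH' -EntryType Warning -EventId 2 `
        -Message ("Number of processes: {0}. Processor queue length: {1}" -f $os.Processes, $os.ProcessorQueueLength)
}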
Monitoring Windows services with Azure Monitor
Another question we are asked regularly is how to use the Azure Monitor tools to get visibility into Windows service health. One of the best options for monitoring services across Windows and Linux leverages the Change Tracking solution in Azure Automation.
The solution can track changes on both Windows and Linux. On Windows, it supports tracking changes on files, registry keys, services, and installed software. On Linux, it tracks changes to files, software, and daemons. There are a couple of ways to onboard the solution, from a virtual machine, Automation account, or an Azure Automation runbook. Read more about Change tracking and how to onboard at Microsoft Docs.
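As a hedged example, one way to enable the Change Tracking solution on an existing Log Analytics workspace with the Az PowerShell module could look like the snippet below (the resource group and workspace names are placeholders). Note that Change Tracking also needs a linked Automation account, so see the onboarding steps in the Microsoft Docs link above for the full picture.
# Placeholder resource group and workspace names
Set-AzOperationalInsightsIntelligencePack -ResourceGroupName 'rg-monitoring' `
    -WorkspaceName 'law-contoso' -IntelligencePackName 'ChangeTracking' -Enabled $true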
This blog post will focus on monitoring of a Windows service, but the concept works the same for Linux daemons.
Changes to Windows services are collected by default every 30 minutes but can be configured to be collected as often as every 10 seconds. It is important to note that the agent only tracks changes, not the current state. If there is no change, no data is sent to Log Analytics and Azure Automation. Collecting only changes optimizes the performance of the agent.
Query collected data
To list the latest collected data, we can run the following query. Note that we use “let” to set the offset between UTC (the default time zone in Log Analytics) and our local time zone. An important thing to remember is what we said earlier: only changes are reported. In the example below, we can see that on 2019-07-15 the service changed state to running. After this record we have no more information. If the VM suddenly crashes, there is a risk that no “Stopped” event will be reported, and from a logging perspective it will look like the service is running.
It is therefore important to monitor everything from different points of view, for example by also monitoring the heartbeat from the VM in this scenario.
let utcoffset = 2h; // difference between local time zone and UTC
ConfigurationData
| where ConfigDataType == "WindowsServices"
| where SvcDisplayName == "Print Spooler"
| extend localTimestamp = TimeGenerated + utcoffset
| project localTimestamp, Computer, SvcDisplayName, SvcState
| order by localTimestamp desc
| summarize arg_max(localTimestamp, *) by SvcDisplayName
Configure alert on service changes
As with other collected data, it is possible to configure an alert rule based on service changes. Below is a query that can be used to alert if the Print Spooler service is stopped. For more details on how to configure the alert, see Microsoft Docs.
ConfigurationChange
| where ConfigChangeType == "WindowsServices" and SvcDisplayName == "Print Spooler" and SvcState == "Stopped"
You may be tempted to use a query to look for Event 7036 in the Application log instead, but there are a few reasons why we would recommend you use the ConfigurationChange data instead:
- To be able to alert on Event 7036, you will need to collect informational level events from the Application log across all Windows servers, which becomes impractical very quickly when you have a larger number of Virtual Machines
- It requires more complex queries to alert on specific services
- It is only available on Windows servers
Workbook report
With Azure Monitor workbooks, we can create interactive reports based on collected data. Read more about Workbooks at Microsoft Docs.
For our service monitoring scenario, this is a great way to build a report of current status and a dashboard.
The following query can be used to list the latest event for each Windows service on each server. With the “case” operator, we can display 1 for running services and 0 for stopped services.
let utcoffset = 2h; // difference between local time zone and UTC
ConfigurationData
| where ConfigDataType == "WindowsServices"
| extend localTimestamp = TimeGenerated + utcoffset
| extend Status = case(SvcState == "Stopped", "0",
SvcState == "Running", "1",
"NA"
)
| project localTimestamp, Computer, SvcDisplayName, Status
| summarize arg_max(localTimestamp, *) by Computer, SvcDisplayName
The 1 and 0 values can easily be used as thresholds in a workbook to colour cells depending on status.
Workbooks can also be pinned to an Azure Dashboard, either all parts of a workbook or just some parts of it.
Monitor a process with Azure Monitor
A common question when working with Azure Monitor is monitoring of Windows services and processes running on Windows servers. In Azure Monitor we can monitor Windows services and other processes the same way: by looking at the process ID as a performance counter.
Even if a process can be monitored by looking at events, events are not always a reliable source. The challenge is that, when looking only at events, there is no “active monitoring” checking whether the process is running right now.
Each process exposes a number of performance counters. None of these are collected by default in Azure Monitor, but they are easy to add under Windows Performance Counters.
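To verify locally on a server what the counter looks like before adding it to the workspace, a quick PowerShell check could be used (notepad is just an example instance name):
# The counter value is the process ID; the call reports an error if no notepad process is running
Get-Counter -Counter '\Process(notepad)\ID Process'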


The following query will show the process ID for Notepad. If the Notepad process is not running, there will be no data. The alert rule, if needed, can be configured to generate an alert if zero results were returned during the last X minutes.
Perf
| where (Computer == "LND-DC-001.vnext.local") and (CounterName == "ID Process") and (ObjectName == "Process")
| where InstanceName == "notepad"
| extend localTimestamp = TimeGenerated + 2h
| where TimeGenerated > ago(5m)
| project TimeGenerated , CounterValue, InstanceName
| order by TimeGenerated desc

Disclaimer: Cloud is a very fast-moving target. It means that by the time you’re reading this post everything described here could have been changed completely. Note that this is provided “AS-IS” with no warranties at all. This is not a production-ready solution for your production environment, just an idea and an example.
Operations Manager Admin Integration Pack (v1)
With the System Center Integration Pack for System Center 2012 Operations Manager we can integrate with Operations Manager and automate maintenance mode, alert handling, and monitoring for alerts and state changes. Unfortunately, there are no activities for handling management packs or management pack objects.
Some time ago I posted a blog post around self-service for Operations Manager (http://contoso.se/blog/?p=2764). That idea was built around creating new objects, like a rule, in XML and then importing the management pack. While this works fine in some scenarios, there are other scenarios where it would be nice to have a bit more flexibility and reliability.
Russ Slaten (a true Texan with a lot of guns), a PFE colleague and Operations Manager Jedi from the US, and I have built a first version of what we call the “Operations Manager Admin Integration Pack” for Orchestrator. The purpose of this integration pack is to enable more self-service and automation scenarios in Operations Manager.
Activities in version 1 are
- Create MP. Creates a new management pack
- Create Performance Collection Rule. Creates a rule that collects performance data
- Create Event Alert Rule. Creates a rule that generates an alert based on event viewer event
- Delete Management Pack. Deletes a management pack
- Delete Rule. Deletes a rule based on rule ID
- Export Management Pack. Exports a management pack
- Get Management Pack. Lists management packs from the management group
- Get Rule. Lists rules. With default settings the activity will list all rules; use the Displayname property to filter the search result
- Import Management Pack. Imports a management pack to the management group
These activities don’t use global connections; instead you specify the management server in each activity. This integration pack requires the Operations Manager PowerShell snap-in on all Runbook servers.
As I think you have already realized, this integration pack enables a lot more self-service scenarios where non-Operations Manager Engineers can order new objects and handle management packs. Please take it for a spin and let us know what you think!
Example configuration for the “Create Performance Collection Rule” activity
OM Server Name | SCOM-Lit.Litware.com |
MP Name | Custom.Example.Sandbox |
Rule Name | Custom.Example.Sandbox.Rule.Test1 |
Rule Displayname | Sandbox Test Rule 1 |
Rule Description | Sandbox Rule for testing |
Rule Target | Microsoft.Windows.Computer |
Object Name | Processor |
Counter Name | % Processor Time |
All Instances | TRUE (used for multi instance counters) |
Instance Name | If AllInstances is false, then fill this in with either a target variable or fixed value |
Interval Seconds | 300 |
DW Only (Data Warehouse) | TRUE (If you only want to write to the data warehouse) |
Is Optimized | TRUE (Use if you’re using the optimized data provider) |
Tolerance | 10 (Percentage or absolute. If value changes more than x then collect, otherwise skip) |
Tolerance Type | Percentage (Percentage or Absolute) |
Maximum Sample Separation | 12 (How many samples can be skipped before forcing collection) |
Example configuration for the “Create Event Alert Rule” activity
Management Pack ID | Custom.Example.Sandbox1 |
Management Server | SCOM-Lit.Litware.com |
Rule ID | Custom.Example.Sandbox.Rule.AlertTest8 |
Rule Description | Sandbox Rule for testing |
Rule Displayname | Sandbox Test Alert Rule 8 |
Rule Target | Microsoft.Windows.Computer |
Computer Name | $Target/Property[Type=”Windows!Microsoft.Windows.Computer”]/PrincipalName$ |
Event Log Name | Operations Manager |
Event ID | 9999 |
Event Source | OpsMgr Scripting Event |
Alert Name | Sandbox Test Alert Rule 8 |
Alert Priority | High = 2, Medium = 1, Low = 0 |
Alert Severity | Critical = 2, Warning = 1, Information = 0 |
Example configuration for the Create Management Pack activity
Example configuration for the Delete Management Pack activity
Example configuration for the Delete Rule activity
Example configuration for the Export Management Pack activity
Example configuration for the Get Management Pack activity
Example configuration for the Get Rule activity
Example configuration for the Import Management Pack activity
Big thanks to our colleague Stefan Stranger for PowerShell support very early one morning 🙂
Download the IP, OMAdminTasks_20130508-1
Note that this is provided “AS-IS” with no warranties at all. This is not a production-ready management pack or solution, just an idea and an example.
Orchestrator dashboard in Operations Manager 2012
When you start utilizing Orchestrator to integrate between services and execute workflows, you soon realize that you need an overview of what Orchestrator is actually doing. With the Orchestrator management pack for Operations Manager 2012 you get a good foundation for monitoring the Orchestrator infrastructure, but not much about what Orchestrator is really doing. If Orchestrator is integrated with Service Manager, most runbooks will run as an activity in a work item in Service Manager, and then we can use Service Manager reports to review what has been executed. In this example I will show you how you can build a dashboard in Operations Manager 2012 to show what is going on in Orchestrator.
With Operations Manager we can run a VB script and return the result as performance data. We can then use reports, performance views or dashboards to look at the performance data. In this example I have created a number of rules that run VB scripts every 15 minutes. Each script queries the Orchestrator database for some information and sends the result back as performance data to Operations Manager. Some of the rules could be merged into one SQL query, but as this is only an example and not a complete management pack I did not rewrite that. In Operations Manager I have created a dashboard to show the data.
Each script has an override-controlled parameter, Script Arguments, which passes the Orchestrator database server and Orchestrator database name to the script. My example rules use a Run As profile named “Contoso – Orchestrator – DB read account” to configure which account to use when querying the Orchestrator database. With default settings, in this example, each query runs every 15 minutes and asks for data for the last hour.
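To give an idea of what each rule script does, here is a rough PowerShell sketch of the pattern: run a query against the Orchestrator database and return the value as performance data through a property bag. The SQL statement below uses hypothetical table and column names (the real rules are VB scripts, and the schema depends on your Orchestrator version); the database server and database name come in as arguments, as described above.
# Sketch only: query the Orchestrator database and return the result as performance data
param($DbServer, $DbName)   # passed in via the rule's Script Arguments parameter

# Hypothetical table/column names; replace with a query that matches your Orchestrator database
$query = "SELECT COUNT(*) AS PendingJobs FROM dbo.Jobs WHERE Status = 'Pending'"

$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Server=$DbServer;Database=$DbName;Integrated Security=SSPI"
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = $query
$pendingJobs = [int]$cmd.ExecuteScalar()
$conn.Close()

# Return the value to Operations Manager as a property bag
$api = New-Object -ComObject 'MOM.ScriptAPI'
$bag = $api.CreatePropertyBag()
$bag.AddValue('PerfValue', $pendingJobs)
$api.Return($bag)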
My example dashboard includes five widgets; each widget shows a number of performance instances.
- Queue
  - Pending Jobs, shows the number of runbooks with pending status, meaning they are waiting to start
  - Top minutes in queue, shows the number of minutes the top job has been in the queue
- Runbook Results
  - Success, shows the number of runbooks that have ended with a success result
  - Warning, shows the number of runbooks that have ended with a warning result
  - Failed, shows the number of runbooks that have ended with a failed result
- Runbook Jobs. This widget shows the number of times each runbook has run with a success result. You can easily see which runbooks are executed most often. The names you see are the names of the runbooks.
- Orchestrator Server Status, shows the status of my Orchestrator roles. In this sandbox all roles are on the same server, SCO01.
- Orchestrator Alerts, shows alerts generated by my Orchestrator machine.
You can download my example MP here, NOT SUPPORTED – Contoso.Orchestrator – v2. Note that this is provided “AS-IS” with no warranties at all. This is not a production-ready management pack or solution for your production environment, just an idea and an example.
As always, big thanks to Patrik for support and good discussions around System Center.
Other examples around scripts in rules, generating performance data, can be found here