Category Archives: System Center Operations Manager 2007
Operations Manager tools
Last week Savision released a free version of Live Maps for Microsoft Operations Manager 2007. Operations Manager administrators everywhere can now benefit from the great visualization capabilities of Live Maps v3. The free version is fully functional and allows IT organizations to create three maps of any type. Download here.
…more demos at the Savision webpage.
Tool number two is from Mark Wolzak. He has created a cool Maintenance Mode GUI that you can download here.
Logfile Check on Linux
In Operations Manager 2007 R2 we have the possibility to monitor Linux and UNIX machines. Among other new features, there are two new management pack templates:
- Unix/Linux LogFile (monitor a logfile for a specified entry)
- Unix/Linux Service (monitor a service with a standalone process)
In this post I will show some ideas for monitoring file size on a Linux machine. File size monitoring is not a default feature in R2, neither on Windows nor on Linux machines. On Windows machines I use a two-state monitor and a script, described in this post.
The first step is to create a script on the Linux side. This script checks how big the file is, and if the file is bigger than 100 bytes it will write a warning to a logfile (scriptlog.log).
#!/bin/sh
find /load.sh -printf '%s %p\n' | while read size name; do
  if [ "$size" -gt 100 ]; then
    echo "$(date) WARNING the file is $size" >> scriptlog.log
  fi
done
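Before wiring the script into cron, it is worth sanity-checking the logic against a throwaway file. A minimal sketch, using /tmp paths purely for illustration (the real script targets /load.sh and writes scriptlog.log in the working directory):

```shell
# Create a 150-byte sample file so the 100-byte threshold is exceeded
head -c 150 /dev/zero > /tmp/load.sh
# Same pipeline as the monitoring script: find prints "size name" per file
find /tmp/load.sh -printf '%s %p\n' | while read size name; do
  if [ "$size" -gt 100 ]; then
    echo "$(date) WARNING the file is $size" >> /tmp/scriptlog.log
  fi
done
# The logfile should now contain one WARNING line mentioning size 150
grep WARNING /tmp/scriptlog.log
```

If the sample file is 100 bytes or smaller, no line is written, which is exactly the behavior the LogFile template relies on.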
The next step is to get Linux to run the script automatically; we can do that with cron. Cron is a time-based job scheduler on Linux, driven by a crontab, a configuration file that specifies what to run and when. My crontab looks like
* * * * * /root/script.sh
It is very simple: the script runs every minute. Configure it with
crontab -e
The next step is to configure the Unix/Linux LogFile management pack template to trigger on WARNING in the scriptlog.log file. It is also important to keep track of the cron process; fortunately, that is monitored by the default SUSE management pack.
You are now monitoring whether there is a problem with the file size. The next step is to get the size of the file as performance data in Operations Manager. This can also be done with a script and a collection rule. Create a Collection Rule (Probe Based/Script (Performance)) and run the following script with the rule:
Dim oAPI, oBag
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreatePropertyBag()
Set objShell = WScript.CreateObject("WScript.Shell")
Set objExecObject = objShell.Exec("cmd /c C:\plink.exe user@192.168.0.71 -pw password stat -c%s /root/script.sh")
Do While Not objExecObject.StdOut.AtEndOfStream
    ' Each line of output is the file size reported by stat
    strText = objExecObject.StdOut.ReadLine()
    Call oBag.AddValue("PerfValue", strText)
    Call oAPI.Return(oBag)
Loop
This script runs plink.exe. Plink (PuTTY Link) is a command-line connection tool; we will use it to execute commands on the Linux side. The script then collects the result of the command, the file size, and sends it back as a performance data value (PerfValue). I have the same kind of script for Windows here.
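The remote half of that pipeline is just the stat command; you can verify what plink would get back by running it directly on the Linux box. A quick sketch, using a /tmp file as a stand-in for /root/script.sh:

```shell
# Create a file with a known size of 42 bytes
head -c 42 /dev/zero > /tmp/script.sh
# stat -c%s prints the size in bytes; this number becomes PerfValue
stat -c%s /tmp/script.sh
```

The second command prints 42 here, and whatever it prints is exactly what the collection rule stores.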
The next thing we might want to check is whether the file exists. We can do that with a two-state monitor. In this post you can read how to configure a two-state monitor with a script. Use the script below in your monitor:
Dim oAPI, oBag
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreatePropertyBag()
Set objShell = WScript.CreateObject("WScript.Shell")
Set objExecObject = objShell.Exec("cmd /c C:\plink.exe user@192.168.0.71 -pw password [ -f /root/thefile.log ] && echo ok || echo bad")
Do While Not objExecObject.StdOut.AtEndOfStream
    strValue = objExecObject.StdOut.ReadLine()
    If InStr(strValue, "ok") Then
        Call oBag.AddValue("Status", "OK")
        Call oAPI.Return(oBag)
    ElseIf InStr(strValue, "bad") Then
        Call oBag.AddValue("Status", "Bad")
        Call oAPI.Return(oBag)
    End If
Loop
That script checks if thefile.log exists in the root directory; if it does, the Linux side sends back "ok", otherwise "bad".
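The existence test that plink runs remotely can be tried on its own; it prints ok when the file exists and bad when it does not. A sketch using a /tmp path for illustration:

```shell
# File present: the test succeeds, so the && branch runs
touch /tmp/thefile.log
[ -f /tmp/thefile.log ] && echo ok || echo bad   # prints ok
# File gone: the test fails, so the || branch runs
rm /tmp/thefile.log
[ -f /tmp/thefile.log ] && echo ok || echo bad   # prints bad
```

The monitor script above simply pattern-matches on these two strings to set the health state.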
Summary: we use a couple of different scripts and forward the results to Ops Mgr. One script echoes to a logfile that we then pick up with the default LogFile management pack template. Another script is run from inside a two-state monitor with the plink.exe tool. In this post I wanted to give you some ideas for getting information into Operations Manager 2007 from your Linux machines.
Collecting Events
I have received a number of questions lately regarding event collection. In this post I will show you how you can collect events and review them both in reports and in the console.
Start by creating a new rule: Authoring/Rules/Create a Rule/Collection Rule/NT Event Log. The collection rule will only collect events, not generate any alerts. In my example I use Windows Server 2008 Computer as the target. I create the rule disabled by default, then override and enable it for a group containing a couple of Windows Server 2008 computer objects.
When you have created the new rule you can create a new event view in the Monitoring workspace. Remember to create the new view in the same management pack where the collection rule is stored.
The next step is to create a report. You can use the generic Custom Event report to create a linked report showing all the events. Run the Custom Event report and select a couple of Windows Server 2008 computers as objects, then filter the report, in my example on Event ID equals 666. Note that you have to check the checkbox for every report field you want to include; if you don't check any checkboxes you will get an empty report.
If you don't like the default event report you can author a new one in Visual Studio. You can read my guide about that here and use the following query when building the data set in Visual Studio:
SELECT
vEvent.DateTime,
vEventPublisher.EventPublisherName as 'EventSource',
vEventLoggingComputer.ComputerName as 'Computer',
vEventLevel.EventLevelTitle as 'Type',
vEvent.EventDisplayNumber as 'EventID',
vEventChannel.EventChannelTitle,
vEventUserName.UserName,
vEventDetail.RenderedDescription as 'EventDescription'
FROM
Event.vEvent LEFT OUTER JOIN
vEventUserName ON vEvent.UserNameRowId =
vEventUserName.EventUserNameRowId LEFT OUTER JOIN
vEventCategory ON vEvent.EventCategoryRowId =
vEventCategory.EventCategoryRowId LEFT OUTER JOIN
vEventPublisher ON vEvent.EventPublisherRowId =
vEventPublisher.EventPublisherRowId LEFT OUTER JOIN
vEventLoggingComputer ON vEvent.LoggingComputerRowId =
vEventLoggingComputer.EventLoggingComputerRowId LEFT OUTER JOIN
vEventLevel ON vEvent.EventLevelId = vEventLevel.EventLevelId LEFT OUTER JOIN
vEventChannel ON vEvent.EventChannelRowId =
vEventChannel.EventChannelRowId LEFT OUTER JOIN
Event.vEventDetail ON vEvent.EventOriginId = vEventDetail.EventOriginId
WHERE vEventLevel.EventLevelTitle = 'Error'
ORDER BY vEvent.DateTime, vEventLoggingComputer.ComputerName
To generate test events you can use eventcreate, which is built into Windows Server 2003 and 2008. For example, run Eventcreate /L Application /D "test" /T ERROR /ID 666 to generate an event in the Application log with event ID 666 and "test" as the event description.
System Center Training
I would like to inform you about two great Operations Manager courses that will be delivered in Sweden this spring.
Microsoft System Center Suite Bootcamp
The SMSE Bootcamp is a dynamic, new 3-day training course from the System Center Technical Readiness team which brings together the core products from Microsoft’s System Center Suite in a series of “Real World”, data center management scenarios. The course has been specifically designed for Technical Consultants to give them the skills and understanding they need to successfully implement the System Center Suite for customers and end users. The course consists of a series of instructor led, hands on labs (HOL), which guide the student through the steps required to both successfully configure and use System Center Operations Manager 2007 (OpsMgr), System Center Configuration Manager 2007 (ConfigMgr), System Center Data Protection Manager 2007 (DPM) and System Center Virtual Machine Manager (SCVMM) in conjunction with core data center applications such as Microsoft SharePoint Server 2007 and Exchange Server 2007 running on the Microsoft Hyper-V platform.
For more information click here (info in Swedish)
Master Class: Management Pack Authoring
This is the course for those who want to learn how to author a management pack. It is a 3-day course including
- Management Pack architecture
- Management Pack tuning
- Management Pack advanced features
- Sealing a management pack
- Author reports for all databases in Ops Mgr 2007
- The Authoring Console
- Linked Reports
- Data Warehouse architecture
- Author performance, events and security reports
- Author custom reporting with Visual Studio
- Connectors
- The Universal Connector
For more information click here
Enable ACS forwarding for a group
I have seen a number of scripts on the Internet to enable ACS forwarding for multiple machines. Unfortunately they do not always work, or they have too many variables to adjust. But there are two scripts on the Operations Manager CD that you can use, one to enable and one to disable ACS forwarders:
- DisableForwarding.ps1
- EnableForwarding.ps1
If you have a custom group, including a number of machines, for which you want to enable ACS forwarding, you can follow the steps below
- In the Operations Console, navigate to the Monitoring workspace, then click the Discovered Inventory view
- Click Change Target Type, in the action pane
- In the Select a Target Type window, select View all targets, then select Computer Group and click OK
- Right-click a group and select Open > Command Shell from the context menu
- Input C:\EnableForwarding.ps1 <FQDN ACS Collector> and press Enter to run the script (e.g. C:\EnableForwarding.ps1 ms01.contoso.local)
- In the popup window, input Operations Manager administrator credentials
- Input cd ..
- Input get-monitoringclass -name "Microsoft.SystemCenter.ACS.Forwarder" | get-monitoringobject | ft pathname
- Verify that all machines in your group are in the list of ACS forwarders
- Input exit to close command shell
If you have your groups in an unsealed management pack you might need to seal that management pack first. There are guides about that here and here.
Author Custom Reports in Ops Mgr 2007
Operations Manager 2007 collects large amounts of data from your environment. By using the Reporting feature, you can create reports based on this data that provide additional information about the health of your environment. Operations Manager can have four types of reports
- Published reports, automatically available in the console after ops mgr reporting installation
- Linked reports, based on existing reports
- Custom reports, authored from queries that you build in Visual Studio
- Report solutions, defined with Visual Studio and available in a management pack
I always try to solve a new report request with a linked report if possible. The next step is to use the built-in SQL Report Builder, which you find under the Reporting workspace. But when using that you will need a report model. A report model is a description of an underlying database that is used for building reports in Report Builder 1.0. For example, in this post I use the ACS db (Audit Collection) report model to build custom ACS reports. In Ops Mgr there are report models for ACS, performance and event reports. But there are scenarios that you can't solve with linked reports or SQL Report Builder, and then Visual Studio is a great tool to build reports with.
The first thing you need to do in Visual Studio when you start a new reporting project is to add a data source. A data source represents a connection to an external data source. The second thing is to add a report and a data set. A data set retrieves rows of data from a data source based on an SQL query. You can for example use the query string below when working with performance reports. As you can see, it looks for performance counters with Available MBytes in the name.
SELECT
vManagedEntityTypeImage.Image,
vPerfHourly.DateTime,
vPerfHourly.SampleCount,
vPerfHourly.AverageValue,
vPerfHourly.StandardDeviation,
vPerfHourly.MaxValue,
vManagedEntity.FullName,
vManagedEntity.Path,
vManagedEntity.Name,
vManagedEntity.DisplayName,
vManagedEntity.ManagedEntityDefaultName,
vPerformanceRuleInstance.InstanceName,
vPerformanceRule.ObjectName,
vPerformanceRule.CounterName
FROM
Perf.vPerfHourly INNER JOIN
vManagedEntity ON Perf.vPerfHourly.ManagedEntityRowId =
vManagedEntity.ManagedEntityRowId INNER JOIN
vManagedEntityType ON vManagedEntity.ManagedEntityTypeRowId =
vManagedEntityType.ManagedEntityTypeRowId LEFT OUTER JOIN
vManagedEntityTypeImage ON vManagedEntityType.ManagedEntityTypeRowId =
vManagedEntityTypeImage.ManagedEntityTypeRowId INNER JOIN
vPerformanceRuleInstance ON
vPerformanceRuleInstance.PerformanceRuleInstanceRowId =
Perf.vPerfHourly.PerformanceRuleInstanceRowId INNER JOIN
vPerformanceRule ON vPerformanceRuleInstance.RuleRowId =
vPerformanceRule.RuleRowId
WHERE
(vPerformanceRule.CounterName LIKE N'%Available MBytes%')
ORDER BY vPerfHourly.DateTime
There are a number of good query strings at this TechNet page. The next thing to do is to start designing your report. You can drag and drop report objects from the toolbox. Report items add data, structure, and formatting to a report and come in two varieties: data regions and independent items. A data region renders data from an underlying data set. Independent report items are items that are not associated with a data set, for example a line or a rectangle. If we continue with the Available MBytes example, a chart would be a good start. By dragging and dropping a chart from the toolbox, and then fields from the dataset, you can easily create a chart. But it is not that easy to read yet.
To make the report more precise we could start by adding a drop down menu to select which machine to look at performance data from. To do that we first need to create a new dataset. You can use the same SQL query as before, but in this dataset only
select DISTINCT vManagedEntity.Path
as we only want machine paths in the drop-down menu. Then go to the Report menu and select to add a report parameter. Create a report parameter with a query-based value, then select your new dataset and the Path field. You also need to add this parameter to your first dataset, as you only want to see performance data for the selected machine. To do that, add vManagedEntity.Path and your parameter to the SQL query:
(vPerformanceRule.CounterName LIKE N'%Available MBytes%') AND (vManagedEntity.Path = @Server)
If we now preview the report, there is a drop-down menu with all machines, and the chart only shows data related to the selected machine.
The next thing would be to change the time range. You can do that with report parameters, adding them to your SQL query. If you want a dynamic time range, for example NOW minus 7 days, you can use the DateAdd function in your SQL query.
If you then right-click the chart there are a number of settings, for example changing the scale, changing the chart type, enabling 3-D and adding a title to the chart. Other things that you might want to add are a header and some text for your report, and a table with details about the data in the chart. You can drag and drop both text boxes and a matrix from the toolbox.
When you are satisfied with your report you can right-click the report project (top left side of visual studio) and deploy the report to your reporting server.
Microsoft.MOM.UI.Console.exe /viewname
I have seen a number of questions about opening the Ops Mgr console with a specific view. If you want to do that you can run the Microsoft.MOM.UI.Console.exe command with the /viewname switch. To get the view name you can follow some tips in this post, or you can look in the management pack XML code.
If you for example export and look in the Microsoft.Windows.Server.Library MP you will see a line starting with <View ID="Microsoft.Windows.Server.Computer.AlertsView". To start the console with that view run:
Microsoft.MOM.UI.Console.exe /viewname:Microsoft.Windows.Server.Computer.AlertsView
Query a database with a monitor
I have seen a number of questions about how to run queries against a database and verify the answer. One way is to run a script inside a monitor. In this blog post I wrote how to set up a script in a two-state monitor. The script in this post counts the number of records returned; if there are more than five, the status of the monitor will be changed.
Const adOpenStatic = 3
Const adLockOptimistic = 3
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreatePropertyBag()
Set objConnection = CreateObject("ADODB.Connection")
Set objRecordSet = CreateObject("ADODB.Recordset")
objConnection.Open _
    "Provider=SQLOLEDB;Data Source=R2B1;" & _
    "Trusted_Connection=Yes;Initial Catalog=ContosoConfiguration;"
objRecordSet.Open "SELECT * FROM roles", _
    objConnection, adOpenStatic, adLockOptimistic
varNo = objRecordSet.RecordCount
If varNo > 5 Then
    Call oBag.AddValue("SQL_Status", "Error")
Else
    Call oBag.AddValue("SQL_Status", "Ok")
End If
Call oAPI.Return(oBag)
Alert Level and Alert Severity
In MOM 2005 we had seven alert severities (10, 20, 30, 40, 50, 60, 70). In Ops Mgr 2007 we only have three: critical, warning and information. If you convert a management pack, or download a converted management pack, you can see in the XML code that for example a rule is generating an alert with AlertLevel 50. If you want to create new views and notification subscriptions for that alert, you need to know how alert level is translated to alert severity in Ops Mgr 2007. I did a test with a rule from a converted BizTalk management pack. I triggered the same rule, but between each test I changed the alert level.
Alert Level 70: Critical alert severity
Alert Level 60: Critical alert severity
Alert Level 50: Critical alert severity
Alert Level 40: Critical alert severity
Alert Level 30: Warning alert severity
Alert Level 20: Information alert severity
Alert Level 10: Information alert severity
All alerts had low priority.
Review a Generic Text Log with Ops Mgr Reporting
I received a question about reviewing log files with Operations Manager reporting. There are rules to collect text logs, but no out-of-the-box report to look at the result. In this scenario you don't want an alert based on the content of the log file; you only want to collect it and review it.
Start by creating a Collection Rule/Generic CSV Text Log rule. Configure it to collect everything from the file on the expression page, in my example (for a CSV log file):
Parameter Name: Params/Param[1]
Operator: Matches Wildcard
Value: *
Directory: C:\LogFiles\
Pattern: logfile.log
Separator: ,
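The rule configured above expects one comma-separated entry per line. A minimal sketch of generating such a log line, using a /tmp path and made-up fields purely for illustration (on the monitored Windows machine the file would be C:\LogFiles\logfile.log):

```shell
# One CSV entry per event: timestamp, severity, message (illustrative fields)
mkdir -p /tmp/LogFiles
echo "$(date '+%Y-%m-%d %H:%M:%S'),WARNING,disk usage high" >> /tmp/LogFiles/logfile.log
cat /tmp/LogFiles/logfile.log
```

Each appended line becomes one collected event, with the comma-separated values exposed as Params/Param[1], Param[2], and so on.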
To review the data we will use the Custom Event report. Unfortunately there is a known issue here: the report does not display data where the filter includes a parameter between 1 and 20, such as Parameter 1. To fix this you have to install hotfix 954823.
To review the log file with the Custom Event report, filter on Object equals the FQDN of the machine with the log file, Object Type equals Windows Computer, Category equals 3 and Channel equals GenericCSVLog.