
Category Archives: System Center Service Manager

Show Status of the last Review Activity (with a pinch of Orchestrator magic)

A friend asked me if it is possible to show the status of a related review activity in the “All Open Service Requests” view. After a while we realized that the requirement was to show the status of the last review activity, so if there were multiple review activities we only wanted to show the last one. The reason they needed this view was that customers called the service desk and asked about their service requests. With a view like this the operator could easily see if all the manual review activities were completed, if the service request was still waiting on more approvals or if someone was building/working on the requested service.

We started by extending the Service Request class in Service Manager with two new properties, one to show the last review activity status and one to show when we last updated that status.

20130616_SRStatus03

We use a runbook to update these properties.

20130616_SRStatus02

A comprehensive description of each activity in the runbook:

  1. Monitor Date/Time. The runbook runs every 30 minutes
  2. Query Database. Truncates the database table that is used to store data temporarily. The runbook uses this table to store review activity information.
    20130616_SRStatus04
  3. Get Object. Gets all open Service Requests from the extended Service Request class
  4. Get Relationship. Gets the related Review Activities for each open Service Request
  5. Update Object. If there are no related review activities it updates the “LastStatus” property with “No Review Activities”
  6. Get Object. If there are related review activities it gets each review activity and writes review activity and Service Request information to the database
  7. Junction. Used to merge all threads together into one
  8. Format the current date/time to a format that works with Service Manager
  9. Query Database. Queries the database to get the last review activity, and its status, for each Service Request (see the sketch after this list)
  10. Update Object. Updates each Service Request that is in the database with the review activity status
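
As a rough illustration of step 9, the sketch below picks the most recent review activity per service request from the temporary table. The table and column names (ReviewActivityTemp, SRID, RAStatus, RAModified) are hypothetical stand-ins for whatever you use in your own temp database, so treat this as a sketch rather than the exact query from the runbook.

# Hypothetical table/column names - adjust to your own temp database schema
# (requires the SQLPS module or similar for Invoke-Sqlcmd)
Import-Module SQLPS
$query = @"
SELECT t.SRID, t.RAStatus
FROM ReviewActivityTemp t
INNER JOIN (
    SELECT SRID, MAX(RAModified) AS LastModified
    FROM ReviewActivityTemp
    GROUP BY SRID
) last ON last.SRID = t.SRID AND last.LastModified = t.RAModified
"@
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "OrchestratorTemp" -Query $query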

The result in the Service Manager console can look like this

20130616_SRStatus01

You can download my example runbook and Service Manager class extension here, 20130616_SRStatus

Note that this is provided “AS-IS” with no warranties at all. This is not a production-ready management pack or solution, just an idea and an example.

Tracking Logon and Logoff Activities in Service Manager

Last week I received an e-mail about tracking logon and logoff activities in Service Manager with Operations Manager. It can be solved with an event collection rule and the default Custom Event report.

  1. In the Operations Manager console, navigate to the Authoring workspace and Rules
  2. Create a new rule of type Collection Rules/Event Based/NT Event Log
  3. Select a suitable management pack or create a new management pack, Next
  4. Input a rule name, for example Contoso – Service Manager – Logon and Logoff
  5. Select a rule target. If you have the Service Manager management pack imported you can use the SCSM 2012 Management Server class as target. Logon and logoff events will be generated on your Service Manager management servers. Next
  6. Event Log Type, select or input Operations Manager, Next
  7. Build the event expression like in the figure below, then save the new rule. In the example I exclude all events about my service accounts; all my service accounts start with svc. Event ID 26328 is logon and event ID 26329 is logoff.

20130309_SCSM01

Once the rule is created and deployed to your Service Manager management servers, they will start reporting events back as soon as someone logs on to or off Service Manager. You can create an event view in the same management pack and configure the event view to show events generated by your new rule.

20130309_SCSM02
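
If you just want to spot-check the raw events on a management server, before or after building the view, Get-WinEvent can read them straight from the Operations Manager log:

# Spot-check the logon/logoff events directly on a management server
Get-WinEvent -FilterHashtable @{ LogName = 'Operations Manager'; Id = 26328, 26329 } |
    Select-Object TimeCreated, Id, Message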

To show this data in a report you can use the default Custom Event report, found under Reporting/Microsoft Generic Report Library. The following figure shows the configuration of the Custom Event report

20130309_SCSM03

and the result of the report

20130309_SCSM04

If you want to look at the events in the Operations Manager data warehouse database you can use the following SQL query

select * from Event.vEvent ev
inner join Event.vEventDetail evd on ev.eventoriginid = evd.eventoriginid
inner join Event.vEventParameter evp on ev.eventoriginid = evp.eventoriginid
where eventdisplaynumber = '26329' OR eventdisplaynumber = '26328'

If you want to build your own report you can use SQL Report Builder. I have an example of that here.

Self-service data recovery with Data Protection Manager, Service Manager and Orchestrator

In my sandbox I test a lot of solutions, management packs, integration packs and ideas. They don't always work out the way I expected 🙂 The result is that I often need to restore a database from backup. I use Data Protection Manager to protect my databases, Service Manager to order the restore and Orchestrator as the “doer”. In this blog post I want to share an example of how to make restoring a database a bit easier.

In the Service Manager self-service portal I have a request offering named Restore Database.

  • Restore to Original Location. If this checkbox is enabled the database will be restored to the original instance. In most cases I restore to a network folder. Network Folder is used in this integration pack in the same way that it is used in the Data Protection Manager user interface: choosing Network Folder recovers to a local path on a server that has the DPM agent installed. I have configured my runbook (also included in this blog post) to always recover to C:\RESTORE on the target machine
  • Target server. If I select to restore to a network folder, the default, I input a server name. For example, if I want to restore the Orchestrator database to my Orchestrator database server I input SCO12SP1-SQL01 in the Target Server text box. The database backup will then be restored to C:\RESTORE on the SCO12SP1-SQL01 server.
  • Recovery Point to Restore. In this query-based list I can select which DPM recovery point to restore. I have a runbook (also included in this blog post) that creates a CI for each recovery point.

20130102_DPM_SelfService05

Service Manager invokes the “1.2 Restore” runbook in Orchestrator. The runbook is divided into two tracks depending on whether we are restoring to a network folder or to the original location. Both the Data Source ID and the Recovery Source ID, used to recover the SQL database, are stored on the Backup CI in Service Manager, so we don't need to get them from DPM within the runbook. In general the runbook restores the database and updates the service request.

20130102_DPM_SelfService02

Runbook “1.1 Create Backup CIs” is the second runbook in this example. It is used to create backup CI objects in Service Manager. The backup class is a custom class that I have created with the Service Manager authoring tool. The runbook runs every hour and creates, updates or deletes CIs of the backup class.

  • Every hour. Invokes the runbook every hour
  • Get Existing Backup. Gets all objects of the backup class in Service Manager. If there are any objects, the “Set Verified to FALSE” activity changes the verified property of all the backup CIs to FALSE.
  • Junction. Used to merge possible multiple threads into one
  • Get Data Source for System Center DBs. On my DPM server I have a protection group named “System Center Databases”; this activity gets all data sources for that protection group
  • Get All Recovery Point. This activity gets all recovery points for the data sources returned by the “Get Data Source for System Center DBs” activity
  • Check if Backup Exist. This activity checks in Service Manager if there is a backup CI, with Active status, for the current BackupID. BackupID is a property of the backup class that I use to give all recovery points a unique ID. The backup ID consists of <Protection Group Name>.<Production Server Name>.<Recovery Point in Time>.<Data source Name> (see the sketch after this list)
  • If a backup CI object already exists, the runbook changes the verified property of the backup CI object to TRUE
  • If no backup CI object exists, a new backup CI is created and a relationship to the server is created
  • Junction. Used to merge possible multiple threads into one
  • Get Non Verified Backups. This activity gets all backup CIs that have not been verified (verified property equals FALSE) and deletes them with the “Delete Backup” activity
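
To illustrate the enumeration and the BackupID format, here is a rough DPM Management Shell sketch. The Get-ProtectionGroup/Get-Datasource/Get-RecoveryPoint cmdlets are the standard DPM 2012 ones, but the property names used to build the ID are my assumptions, so verify them against your DPM version.

# Rough sketch, run in the DPM Management Shell on the DPM server
$pg = Get-ProtectionGroup -DPMServerName "DPM01" |
    Where-Object { $_.FriendlyName -eq "System Center Databases" }
foreach ($ds in (Get-Datasource -ProtectionGroup $pg)) {
    foreach ($rp in (Get-RecoveryPoint -Datasource $ds)) {
        # BackupID: <Protection Group>.<Production Server>.<Point in Time>.<Data source>
        # Property names are assumptions - check what your DPM version exposes
        "{0}.{1}.{2}.{3}" -f $pg.FriendlyName, $ds.ProductionServerName,
            $rp.RepresentedPointInTime, $ds.Name
    }
}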

20130102_DPM_SelfService01

Backup CIs listed in the Service Manager Console

20130102_DPM_SelfService03

Backup CI

20130102_DPM_SelfService04

Relationship between Windows server and backup CI

20130102_DPM_SelfService06

When the runbook is done the service request is updated with some information, which can be read from the Service Manager self-service portal. As you can see, the database has been restored to C:\RESTORE on the SCO12SP1-SQL01 server. A very quick and easy way to roll back a database.

20130102_DPM_SelfService07

You can download my example files here, 20130103_DPM. Note that this is provided “AS-IS” with no warranties at all. This is not a production-ready management pack or solution for your production environment, just an idea and an example.

Manage new monitoring by self-service (light MP authoring with a pinch of Orchestrator magic)

A common scenario I often see is that everyone in the IT organisation knows that Operations Manager can monitor everything and fulfill all requirements, but it is too complicated for the different expert/administration teams to do anything in Operations Manager. For example, if the Exchange team wants to monitor an event they need to ask the Operations Manager team to create the rule. Of course the Operations Manager team doesn't have time to do that the same day; instead there is a delay, and once the rule is created the Exchange team has already solved it in some other way. The result is that Operations Manager is not used as much as it should be.

In previous posts I showed how to handle overrides and groups in Operations Manager with self-service in Service Manager and a bit of Orchestrator. In this post I want to share an idea of how to handle new monitoring, for example creating new rules from the Service Manager self-service portal.

My example starts with a service request in the Service Manager portal. An engineer goes in and requests a new Windows event rule in Operations Manager. The engineer fills in the event ID, rule name, Windows log, service/system and also the alert name. A service request is created, and in the service request there is a runbook activity.

The runbook activity triggers a “master runbook”. The master runbook first invokes a runbook that will find a suitable management pack, then invokes a runbook to create a new monitor or rule (I have only included the rule part so far), then it invokes a runbook to import the management pack into Operations Manager and finally it invokes a runbook to update the service request.

The 60.3 Find MP runbook will find and return the management pack to use. It uses the service parameter from the service request to select the management pack. All management packs that are in production are stored in a “production” folder. The “Check if MP exists” activity checks if there is a management pack in that folder for the selected service. If there is, it makes a copy of it to an “archive” folder and returns the file path. If there is no management pack it will write a new management pack file and return the path of that file. The “Write new MP file” activity writes all the needed XML code to a new XML file and takes a number of input parameters. A sketch of that find-or-create logic follows below.
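
Here is a minimal PowerShell sketch of the find-or-create logic, assuming hypothetical folder paths and a Contoso.<Service>.xml naming convention. The skeleton XML is trimmed to the bare minimum; a real management pack also needs references, display strings and so on.

# Hypothetical folders and naming convention - adjust to your environment
$prodFolder = "\\server\MPs\Production"
$archiveFolder = "\\server\MPs\Archive"
$service = "Exchange"                              # service parameter from the service request
$mpFile = Join-Path $prodFolder "Contoso.$service.xml"

if (Test-Path $mpFile) {
    # MP exists: archive a copy and reuse the production file
    Copy-Item $mpFile -Destination $archiveFolder
} else {
    # No MP for this service yet: write a new, nearly empty MP file
    # (skeleton trimmed for readability)
    @"
<ManagementPack ContentReadable="true" SchemaVersion="2.0">
  <Manifest>
    <Identity>
      <ID>Contoso.$service</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>Contoso $service</Name>
  </Manifest>
  <Monitoring>
    <Rules>
    </Rules>
  </Monitoring>
</ManagementPack>
"@ | Set-Content -Path $mpFile
}
$mpFile    # returned to the calling runbook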

The 60.2 Create Rule runbook will first translate between the service parameter and the target parameter needed in the management pack. In my example I only have one target there, Windows 2008 Computer. The runbook then finds the <Rules>, <DisplayStrings> and <StringResources> sections of the management pack and adds the new rule. We use “Find” to know where in the management pack, on which line, to insert the new configuration. Each “Add Rule -” activity uses input parameters when writing the new rule. The sketch below illustrates the insert step.
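
As a hedged illustration of that insert step, the sketch below splices a rule fragment in just before the closing </Rules> tag. The rule XML itself is reduced to a placeholder here; a real event collection rule needs a proper data source module and write actions.

# Sketch: insert a new rule just before </Rules>. $ruleXml is a placeholder;
# build the real rule XML from the service request input parameters.
$mpFile = "\\server\MPs\Production\Contoso.Exchange.xml"    # hypothetical path
$ruleXml = '    <Rule ID="Contoso.Exchange.Event1234.Collection">...</Rule>'

$content = Get-Content $mpFile
$line = ($content | Select-String -Pattern '</Rules>' -SimpleMatch).LineNumber
# LineNumber is 1-based, so insert the fragment on the line above </Rules>
$newContent = $content[0..($line - 2)] + $ruleXml + $content[($line - 1)..($content.Count - 1)]
Set-Content -Path $mpFile -Value $newContent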

The 60.5 Import MP runbook runs a PowerShell script to import the management pack into Operations Manager (see the sketch below). The last runbook, 60.4 Update Service Request, updates the service request with some information about the new management pack.
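
The import step can be as small as this, assuming the Operations Manager 2012 PowerShell module is available on the runbook server (server names and paths are examples):

# Import the freshly written MP into Operations Manager
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "OM01"    # hypothetical management server
Import-SCOMManagementPack -FullName "\\server\MPs\Production\Contoso.Exchange.xml"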

This example shows a way to use the self-service portal in Service Manager to order a new event rule in Operations Manager. Orchestrator builds the new rule in a management pack and imports it into Operations Manager. An engineer who doesn't know much about Operations Manager can still “author” a new rule and import it into Operations Manager. You could include an approval step in the process, and you could also include a check on the Orchestrator side to make sure the management pack and the new rule follow best practices.

You can download my example runbooks here, 60 Create OM Rule. Please note that this is provided “as is” with no warranties at all. This is not a production-ready management pack or solution for your production environment, just an idea and an example.

vNext of this example could include version handling in each MP, which should be easy to build with a couple of counters. Information about the service request requesting the new management pack version could also be included in the management pack description, shown in the Operations Manager console.

Execute a service request at a later date

I received a question a couple of days ago about how to delay a runbook. The scenario was that someone submits a service request in Service Manager which includes a couple of runbook activities, but these runbook activities should not run until two days later. As we don't want runbooks to be hanging, looping or paused for two days we can't simply add a “wait two days” activity in a runbook. We also want to see in the service request in Service Manager that we are waiting for Orchestrator, and the service request should not be marked as completed until the runbook activities have run.

There are a number of ways to solve this. In this blog post I will show one where we use multiple runbooks and an external database to store data temporarily. The scenario in this example is that you order a server reboot from the self-service portal in Service Manager. In the portal you pick a date and also set a checkbox if the server is an IIS server. Server reboots are only allowed after office hours, in this example around 23:00 every evening. The process is:

  1. You submit a service request from the self-service portal in Service Manager, saying that a server needs to be rebooted on a specific day. You also set whether the server is an IIS server or not
  2. A runbook is triggered and writes the service request data into an external database
  3. The service request moves to the next activity, which is a manual activity
  4. Another runbook runs every day at 23:00 and checks the external database for reboot jobs that should be executed that day. If there is a job, the runbook reads all the service request details from the external database, restarts the machine and updates the manual activity. If the server is an IIS server some extra steps are executed during the restart

The user browses to the self-service portal and submits the service request. As we have configured the date input field as a date type in the service request we get a nice date picker by default.

The first runbook writes the service request data to an external SQL database. Get Relationship and Get Objects get the service request ID from the runbook activity instance GUID, which is provided by the Initialize Data activity. The next image shows an example of the data stored in the external database, in this example the OrchestratorTool database. A sketch of the table and the insert follows below.
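
For reference, here is a minimal sketch of the external table and the insert the first runbook performs. The OrchestratorTool database name is from the post, but the Reboot table layout (SRID, ServerName, IsIIS, Date) is my assumption based on the queries described below.

# Assumed layout of the external table and the insert performed by the first
# runbook (requires the SQLPS module or similar for Invoke-Sqlcmd)
Import-Module SQLPS
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "OrchestratorTool" -Query @"
CREATE TABLE Reboot (
    SRID nvarchar(50),        -- service request ID
    ServerName nvarchar(100),
    IsIIS bit,                -- from the checkbox in the portal
    Date datetime             -- requested reboot date (UTC)
)
"@
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "OrchestratorTool" -Query `
    "INSERT INTO Reboot (SRID, ServerName, IsIIS, Date) VALUES ('SR42', 'SRV01', 1, '2013-01-10')"
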
The second runbook is a bit more complicated:
  • Monitor Date/Time, triggers the runbook every day at 23:00
  • Query Database, asks the external database if there are any reboot jobs to execute (SELECT * FROM Reboot WHERE DATE <= GETUTCDATE())
  • If there are any rows returned the runbook moves to Restart System. Restart System reboots the target machine; it also sends the service request ID to the target machine's shutdown tracker.
  • Run .Net Script, waits five minutes (Start-Sleep -s 300)
  • Get Computer/IP Status, tries to ping the machine. If the percentage of packets received is 100 the runbook moves to Query Database (2), else it generates an alert in Operations Manager
  • Query Database (2), queries the external database to see whether the target machine is an IIS server or not. As we use that as a condition on the link we need to run the query again; a link can only have a condition based on the previous activity.
  • If the target machine is an IIS server we check the web server service; if the service does not equal “Service Running” we generate an alert in Operations Manager
  • If the machine is not an IIS server, or if the web server service is running, we move to Get SR
  • Get SR picks up the service request
  • Get Relationship gets all related manual activities; in this example we only have one, named “Waiting for Orchestrator to reboot”
  • Update Activity, sets the “Waiting for Orchestrator to reboot” activity to completed
  • The last Query Database activity deletes the reboot job from the external database (a rough PowerShell equivalent of these steps is sketched after this list)
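
To make the flow concrete, here is a rough PowerShell equivalent of the runbook's core steps. It assumes the Reboot table sketched earlier; the alerting and the Service Manager update are reduced to comments since they depend on your environment.

# Rough equivalent of the 23:00 runbook, assuming the Reboot table from above
Import-Module SQLPS
$jobs = Invoke-Sqlcmd -ServerInstance "SQL01" -Database "OrchestratorTool" `
    -Query "SELECT * FROM Reboot WHERE Date <= GETUTCDATE()"
foreach ($job in $jobs) {
    Restart-Computer -ComputerName $job.ServerName -Force
    Start-Sleep -Seconds 300                    # give the machine time to come back
    if (Test-Connection -ComputerName $job.ServerName -Count 4 -Quiet) {
        if ($job.IsIIS) {
            # Check the web server service; alert in Operations Manager if stopped
            $svc = Get-Service -ComputerName $job.ServerName -Name W3SVC
            if ($svc.Status -ne 'Running') { <# generate OpsMgr alert #> }
        }
        # ... update the manual activity in Service Manager here ...
        Invoke-Sqlcmd -ServerInstance "SQL01" -Database "OrchestratorTool" `
            -Query "DELETE FROM Reboot WHERE SRID = '$($job.SRID)'"
    } else {
        # Machine did not answer ping: generate an alert in Operations Manager
    }
}
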
The image below shows the event written to the System log on the target machine when it is about to be restarted.

You could use a query-based list in the self-service portal to let the user pick a server to reboot based on, for example, ownership, location or service. In this post you can see an example of how to build a query-based list in the self-service portal.

Don't forget to add a couple of activities to handle Operations Manager maintenance mode in your reboot runbooks too 🙂

As an alternative solution you could also create a new custom activity class instead of using the default manual activity (an example of how to build a custom activity can be found here), and then have only one runbook that checks every day at 23:00 for activities of that class in an ongoing state. The runbook can read all the settings from the custom activity, for example which server to reboot and when. If “when” is today the runbook reboots the server and marks the activity as completed.

Note that this is provided “AS-IS” with no warranties at all. This is not a production-ready management pack or solution for your production environment, just an idea and an example.

Building a change calendar with Orchestrator and Service Manager

A change calendar keeps everyone informed about when changes will be performed and also gives an overview of planned changes in the environment. We handle change requests as work items in Service Manager, but there is no change calendar out-of-the-box in Service Manager. In this blog post I will show how to build a change calendar with a shared calendar in Exchange, Service Manager and Orchestrator. The idea is that when a change request is created in Service Manager, an appointment is created in a shared Exchange calendar. All engineers can then access that calendar and see all planned changes in one view.

The first thing to create is the runbook. The runbook monitors Service Manager for new change requests; when there is one, the runbook triggers and creates the appointment in Exchange. The runbook is quite small, as you can see in the image below.

I use the Exchange Users Integration Pack to create the appointment, download it here. The “Monitor New Change Request” activity is configured to trigger on all new objects of the Change Request class. The “Create Appointment” activity and the Exchange connector are configured according to the images below.

Below is an image of the result in the shared calendar, which every engineer can see from Outlook

Done! It is a very quick and easy solution that will bring a lot of value to many organisations. But what if you want to add something manually? You can of course create an appointment manually in Outlook and invite the shared calendar, but you could also build a service request and use the self-service portal in Service Manager to add it. Include a runbook activity in the service request that creates the appointment.

What if you need to delete or update an appointment? The Exchange User integration pack will publish an ID for the item you create, in this case an appointment. Write that ID back to the change request. Then you can create a runbook that monitors change requests for changes, for example status changed to cancelled, and triggers a runbook that deletes or updates the appointment. If you have the Exchange appointment ID stored on the change request in Service Manager it is easy to pick it up and update/delete the correct appointment, instead of trying to find the correct appointment based on title or start date. In the runbook below I have added an Update Object activity that writes the Exchange ID to the change request.

Pass information between runbook activities

In Service Manager 2012 we can use runbooks from Orchestrator as activities in, for example, a service request template. This brings a lot of cool possibilities for automation in your data center. I have blogged a number of examples around this that you can find here on the blog. But in most of the examples I use only one runbook activity, and if I need to use multiple runbooks they are all started from one “master” runbook, not as individual activities in the service request template. I came across a scenario where we needed to use multiple runbook activities in the same service request template and we needed to pass information between the runbook activities. One benefit of splitting a complex workflow into multiple runbook activities is that it is easier to track the result in Service Manager, when each step is its own runbook activity. It also means a Service Manager operator can re-start/re-run a step if needed.

In this blog I will show how you can use two runbook activities in a service request and pass information from runbook activity one to runbook activity two. We will also see how the result from both runbook activities can be published in the service request and self-service portal. In my demo example the first runbook generates an “account name” and the second runbook generates a “phone number”.

Each instance of a runbook activity has a number of “generic/blank” fields that we will use to store the result; they are named, for example, TEXT1 to TEXT10. Why not store the result in the Service Request? Even if the service request has some fields I could use, I don't want to do that, as it would be difficult to remember which result is stored in which property. An alternative would be to extend the service request class, but that would result in all kinds of service requests getting those new properties, as they all use the same service request class.

The first runbook, in this example named 12.1, has one input parameter, which is the runbook activity ID. The runbook then generates an account name (four random characters) and returns the account name back to Service Manager.

In Service Manager I have configured that OUT from this runbook should be written to the TEXT1 field of the runbook activity instance. This is done in the service request template, on the runbook activity.

The second runbook, in this example named 12.2, has one input parameter, which is the runbook activity ID. The runbook then generates a phone number (random numbers). Then it gets a bit more complicated 🙂

  • Get Related Service Request. This activity uses the input parameter, the runbook activity instance ID, to get the related Service Request.
  • Get Service Request. This activity gets the Service Request that the previous activity found
  • Get Related Runbook Activities. This activity finds all runbook activities related to the service request. We need to do this to find the first runbook activity and read its TEXT1 field.
  • Get Runbook Activities. The “Get Related Runbook Activities” activity will return all related runbook activities; in this example there are two runbook activities in the service request. The link after the “Get Runbook Activities” activity is configured with a filter to only continue with the runbook activity that has a specific title. This title is the title you configure in the service request template.
  • Update Imp Result. This activity updates the service request with both an implementation result and a new description. Implementation result and description are both visible in the Service Manager console. The description field is also visible in the self-service portal. (A hedged sketch of the lookup follows after this list.)
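
If you want to prototype the lookup outside Orchestrator, here is a hedged sketch using the community SMLets module. The class and relationship names are the standard SCSM 2012 ones as far as I know, but the relationship traversal and the Text1 field access should be verified against your environment.

# Hedged SMLets sketch: from a service request, read TEXT1 of the first
# runbook activity (title as configured in the service request template)
Import-Module SMLets
$srClass = Get-SCSMClass -Name System.WorkItem.ServiceRequest$
$sr = Get-SCSMObject -Class $srClass -Filter "Id -eq SR42"     # hypothetical ID
$containsRel = Get-SCSMRelationshipClass -Name System.WorkItemContainsActivity$

# Find the first runbook activity by its template title and read Text1
$first = Get-SCSMRelatedObject -SMObject $sr -Relationship $containsRel |
    Where-Object { $_.Title -eq "12.1 Generate account" }      # title from the template
$first.Text1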

 

 

Different Service Level Objectives (SLO) for different departments

The following post shows how to configure different Service Level Objectives (SLOs) for the DEV and HR departments. The scenario is that the two departments have different SLOs for the time between incident creation and first response. It could also be two different customers or companies.

  1. Queues are used to group similar work items that meet specified criteria, for example all incidents that are classified by analysts as e-mail incidents. Queue membership rules are dynamic and are periodically recalculated to ensure that the queue membership list is current. For this example we need two queues: one for incidents where the affected user is in the HR department and one for incidents where the affected user is in the DEV department. Navigate to Library/Queues to create these
    1. Queue name: Contoso – Queue – User dep eq HR
    2. Work item type: Incident (typical)
    3. Criteria: Affected User [User] Department contains HR
  2. We need to configure a calendar to control when the SLO should be active. In the Service Manager console navigate to Administration/Service Level Management/Calendar. In my example I have configured the calendar to be always on: all days are checked and the start and end times are 12:00:01 AM to 11:59:59 PM.
  3. We need a metric to configure what time we want to measure, in this scenario the time between when the incident was created and the first response. In the Service Manager console navigate to Administration/Service Level Management/Metric
  4. The last thing to configure is the Service Level Objectives; in the Service Manager console navigate to Administration/Service Level Management/Service Level Objectives. Configure the Service Level Objective to use the calendar and metric created before. As we will use different target and warning thresholds for the two departments we need to create two Service Level Objectives, one for the HR department and one for the DEV department.

When a new incident is created with an affected user from either the DEV or the HR department it will be included in the corresponding queue. If you want to verify the queue membership you can use a script written by Anton Gritsenko. When the incident is in the queue the SLO will be applied too. It can take a couple of minutes, but you should then see the SLO on the Service Level tab of an incident.

Reporting has changed a lot with Service Manager 2012. We now have cubes to analyze the data, and we can use Excel to easily drag and drop fields to build tables. You can use the Service Manager WorkItems Cube to analyze this data in Excel. If you, for example, want to see SLO information related to each incident, grouped by department, configure Excel like the image below. Note that it can take some time, hours, before all data has been transferred to the data warehouse and the cube has been updated. As you can see there is one row above DEV; that row lists all incidents where the affected user doesn't have a department configured.

Handle disk IOPS when deploying, deleting or updating a virtual machine

A couple of weeks ago I needed to build a solution where disk IOPS are allocated when a new virtual machine is deployed. The scenario was that the virtualization administrators wanted to make sure no disks were over-allocated in terms of disk performance. When Virtual Machine Manager deploys a new virtual machine it doesn't look at what kind of virtual machines are already deployed on the disk, only at the current disk load. In the worst case this could result in very poor disk performance if all virtual machines start working heavily with the disk at the same time. In the virtual environment we had three virtual machine templates: small, medium and large. There was an estimate of the disk IOPS required by each template. The virtualization administrators needed to make sure that no more than one virtual machine based on the large template was deployed to a disk, that new virtual machines were deployed to the disk with the most free IOPS, and that no disk was over-allocated in terms of disk IOPS.

In Service Manager, disks will show up as CIs if you have a connector to Operations Manager. But the default properties of the disk class were not enough; we needed to store information about disk IOPS and didn't want to affect anything else, so we created a new configuration item class that would only be used for this purpose. The new class was created with the Service Manager authoring tool, example here.

We could now create CI objects for each disk that the virtual environment was using. The disk CIs include max, allocated and free IOPS. We then published an offering in the Service Manager self-service portal to request new virtual machines. We used a runbook to deploy the new virtual machine. As you can see in the runbook below we have a main runbook building the virtual machine, and then we invoke another runbook to figure out which disk to use. The runbook also writes a registry value to the new machine with information about the template used. Operations Manager will later discover this information and start monitoring the new virtual machine based on the virtual machine template. Virtual machines are monitored with different thresholds and in different ways depending on which template was used during deployment.

The “Get disk” runbook figures out which disk to use. It uses a temp database to store disk information in, so the first step in the runbook is to clean this database to make sure it runs with fresh data. It then adds data to the temp database about each virtual machine template, and queries Service Manager to find all disks that have enough free IOPS for the virtual machine template. After the junction activity the runbook runs a SQL query to find out which disk to use; if we are about to deploy a new virtual machine based on the large template we also check that there is a free disk with enough IOPS that doesn't already host a large virtual machine. In the end the runbook updates the disk CI in Service Manager, allocates the disk IOPS, then returns which disk to use to the “main” runbook, which will continue to build the virtual machine. A sketch of the selection query follows below.
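
To make the selection step concrete, here is a hedged sketch of the query against the temp database. The table and column names (DiskTemp, DiskName, FreeIOPS, HasLargeVM) and the sqlcmd variables are hypothetical; the actual runbook may structure this differently.

# Pick the disk with the most free IOPS that satisfies the template's IOPS
# estimate and, for the large template, doesn't already host a large VM.
# Single-quoted here-string so $(...) is passed to sqlcmd, not PowerShell.
Import-Module SQLPS
$query = @'
SELECT TOP 1 DiskName
FROM DiskTemp
WHERE FreeIOPS >= $(RequiredIOPS)
  AND ($(IsLarge) = 0 OR HasLargeVM = 0)
ORDER BY FreeIOPS DESC
'@
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "OrchestratorTool" `
    -Query $query -Variable "RequiredIOPS=500", "IsLarge=1"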

We also created a runbook to handle deletion of a virtual machine, including giving the allocated disk IOPS back to the correct disk CI. We also created a portal offering for updating a virtual machine; for example, if a machine is running on the medium template but needs more RAM we can update it to the large template. The portal offerings for deleting and updating virtual machines use a query result to show only the virtual machines where the portal user is the owner, to make sure no one deletes or updates the wrong virtual machine.

Example of the runbook that updates virtual machines

Example of the runbook that deletes virtual machines

Get-FileAttachments – Download attached files from Service Manager

In the Integration Pack for Service Manager 2012 there is an activity to upload attachments, and you can also look at attachments with the Get Object activity. But what you are looking at is just the object, not the content of the file. For example, you can see that incident IR123 has a related file (System.FileAttachment) and you can see some properties of the file, like name and size, but you can't read it. This has been a challenge in a number of scenarios. A couple of days ago I asked Patrik if we couldn't do something about it, and today he uploaded the first public version of a PowerShell script. This PowerShell script, Get-FileAttachments, will dump all file attachments to a folder. It works with both work items like incidents and configuration items like Windows computers.

One scenario that I read about in the forum a couple of days ago is the fact that attachments are not moved to the data warehouse in Service Manager. Instead you need to archive them before the work item or config item is moved to the data warehouse. There is no built-in feature for this. But with a simple runbook in Orchestrator and the Get-FileAttachments script it is solved 🙂

This example monitors Service Manager for incidents that are updated with a new status equal to Closed. The runbook then checks if there are any related file attachments; if there are, the script is triggered. The script that I run as a command in this example gets the SC Object GUID from the Monitor Closed Incidents activity as input. C:\SCSM_Archive is my archive folder where all attachments are stored. The script creates a folder named after the incident and stores all the files in that folder. A hedged sketch of the underlying technique follows below.
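
If you are curious what dumping attachments looks like under the hood, this is roughly the SDK pattern: the System.FileAttachment object exposes the file as a stream that you copy to disk. A simplified sketch using SMLets follows; the exact property names and the read loop should be checked against the real Get-FileAttachments script.

# Simplified sketch of the attachment-dump technique (SMLets assumed)
Import-Module SMLets
$ir = Get-SCSMObject -Class (Get-SCSMClass -Name System.WorkItem.Incident$) `
    -Filter "Id -eq IR123"                          # hypothetical incident ID
$rel = Get-SCSMRelationshipClass -Name System.WorkItemHasFileAttachment$
foreach ($file in (Get-SCSMRelatedObject -SMObject $ir -Relationship $rel)) {
    $folder = Join-Path "C:\SCSM_Archive" $ir.Id
    New-Item -ItemType Directory -Path $folder -Force | Out-Null
    # Content is exposed as a stream; copy it to disk with a buffer loop
    $fs = [System.IO.File]::Create((Join-Path $folder $file.DisplayName))
    $buffer = New-Object byte[] 8192
    while (($read = $file.Content.Read($buffer, 0, $buffer.Length)) -gt 0) {
        $fs.Write($buffer, 0, $read)
    }
    $fs.Close()
}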

Some time ago I wrote a blog post about converting incidents to service requests. That is a scenario where you would also want to include this Get-FileAttachments script, to make sure all attached files are moved to the other work item too. You can build an activity and an integration pack on the script too; see this blog post for an example.

You can download the script here.