If your Opalis database is filled up with log entries, console performance will be affected. Navigating between folders in the console, viewing workflows, or console “hangs” in general can be a result of too much log data in the database. By default the log purging feature is not enabled in Opalis. The result of that is that no historical data is ever removed from the Opalis database, unless purging is manually configured and then either executed by clicking the “Purge Now” button or scheduled by selecting “Schedule Log Purge”. In this post I will share some common practices around log purging and database maintenance for the Opalis database. The Log Purge Configuration dialog box can be accessed from the Opalis Integration Server Client by right-clicking the server name in the navigation pane and selecting Log Purge.
The purge feature will purge data from the following three tables in the Opalis database:
- POLICYINSTANCES
- OBJECTINSTANCES
- OBJECTINSTANCEDATA
When working with the Opalis Integration Server Client you can review historical data in a couple of places:
- The Log tab shows you running policy instances
- The Log History tab shows you information about policy instances that have completed
- The Audit History tab shows changes made to a policy, grouped by each time you click “Check In”
- The Events tab shows you events from Opalis, for example if you use the Send Platform Event activity within a policy
Both the Log tab and the Log History tab show you data from these three tables.
Both views, Log and Log History, show selected data from the POLICYINSTANCES table. When you click on a log entry, data for the objects in that policy instance is shown from the OBJECTINSTANCES table. When you select an object, data for that object is read from the OBJECTINSTANCEDATA table.
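Before you configure a purge it can be useful to see how much data each of these tables actually holds. A minimal check, assuming the default table names in the Opalis database, is a simple row count per table:
-- Quick look at how much log data has accumulated.
-- Run against the Opalis database; assumes the default table names.
SELECT 'POLICYINSTANCES' AS [Table Name], COUNT(*) AS [Row Count] FROM [POLICYINSTANCES]
UNION ALL
SELECT 'OBJECTINSTANCES', COUNT(*) FROM [OBJECTINSTANCES]
UNION ALL
SELECT 'OBJECTINSTANCEDATA', COUNT(*) FROM [OBJECTINSTANCEDATA]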
The following query can be used to list Policy ID, Policy Name, SeqNumber, Action Server, and Number of Instances. The Number of Instances indicates how often the Action Server has executed the policy since the last log purge. This query is very helpful for seeing which policy is filling up your tables with historical data.
SELECT
    [p].[UniqueID] AS [Policy ID],
    [p].[Name] AS [Policy Name],
    [prq].[SeqNumber],
    [a].[Computer] AS [Action Server],
    COUNT([pi].[UniqueID]) AS [Number Of Instances]
FROM [POLICY_REQUEST_HISTORY] AS [prq]
INNER JOIN [POLICIES] AS [p] ON [prq].[PolicyID] = [p].[UniqueID]
INNER JOIN [POLICYINSTANCES] AS [pi] ON [prq].[SeqNumber] = [pi].[SeqNumber]
INNER JOIN [ACTIONSERVERS] AS [a] ON [pi].[ActionServer] = [a].[UniqueID]
WHERE
    [prq].[Active] = 1 AND
    [pi].[TimeEnded] IS NOT NULL
GROUP BY
    [p].[UniqueID],
    [p].[Name],
    [prq].[SeqNumber],
    [a].[Computer]
ORDER BY
    [Number Of Instances] DESC
Regarding backup of the Opalis database, I have seen a couple of different solutions. The first solution is to export all policies and global settings. This gives you a simple way to restore: just re-import the export files. In case of a disaster you can do a quick re-install of the Opalis environment and import the files. You can also include the export step in your change management process; before you import a new policy into Opalis, you first make a backup copy by exporting.
The second solution is to do a backup of the database; an advantage of this solution is that you keep all the historical data. But if you restore the database from a backup taken while policies were running, you can run into problems with state data. In that case you can execute the sp_StopAllRequests stored procedure to clear out any active requests and instances that aren’t desired. The restore process gets a little bit more complicated with the second solution.
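For reference, the call itself is simple. This is a minimal sketch, assuming the restored database is named Opalis; adjust the database name to your environment:
-- Run against the restored Opalis database to clear out active
-- requests and policy instances left over from the backup.
-- Assumes the database is named "Opalis".
USE [Opalis]
GO
EXEC sp_StopAllRequests
GO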
With the first backup solution I recommend, and this is a general recommendation, that you keep state data and historical data in an external data store. For example, we could use Service Manager to track what is happening: Opalis executes and Service Manager remembers. As soon as Opalis is done it drops everything; it only keeps data in memory during execution (and maybe some logs). Working with Service Manager gives us a great way to track everything. It could of course be any system that we can integrate with, for example an extra SQL database.
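To illustrate the “extra SQL database” idea, the sketch below shows a hypothetical external tracking table and the insert a policy could run as its last step, for example through a database activity. The table name, column names, and values are made up for this example and are not part of the Opalis schema:
-- Hypothetical external tracking table; NOT part of the Opalis database.
CREATE TABLE [dbo].[PolicyRunLog] (
    [RunID]      INT IDENTITY(1,1) PRIMARY KEY,
    [PolicyName] NVARCHAR(255) NOT NULL,
    [Result]     NVARCHAR(50)  NOT NULL,
    [StartTime]  DATETIME      NOT NULL,
    [EndTime]    DATETIME      NOT NULL
);

-- A policy could run an insert like this as its last step,
-- with the values supplied from published data.
INSERT INTO [dbo].[PolicyRunLog] ([PolicyName], [Result], [StartTime], [EndTime])
VALUES (N'My Policy', N'Success', '2011-01-01 10:00', '2011-01-01 10:05');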
Thanks to my colleague Robert Riedmaier for sharing his deep knowledge around Opalis log purge, backup and database maintenance. Thanks Robert!
Hi Anders,
Thanks for the feedback.
Confirming that I should only purge from the POLICYINSTANCES table, so that when I then go to my Opalis client (I am using Opalis version 6.0) I will be able to navigate and the client will not hang any further. The problem is with the Log tab only, not the Log History tab.
Thanks for the help
Thanks Diana
Hi, I don’t think you need it, see the info below:
The log purge job purges jobs from the POLICYINSTANCES, OBJECTINSTANCES, and OBJECTINSTANCEDATA tables in the Orchestrator database. The Historical Data pane in the Runbook Designer console shows data from these tables:
* The Log and Log History views show selected data from the POLICYINSTANCES table.
* When you click on a log entry, the OBJECTINSTANCES table shows data for the objects in that policy instance.
* When you select an object, data for that object is read from the OBJECTINSTANCEDATA table.
Thanks for the article on Opalis log purge, a very good article.
The Log Purge on the navigation pane is what I use, but it ONLY purges the policy instances on the Log History tab and NOT the Log tab.
The Log tab is building up and I am having problems navigating and viewing workflows.
May I purge the policy instances directly in the Opalis database? The tables that I would purge according to date and policy ID would be:
Policyinstances
Objectinstances
Objectinstancedata
Please confirm that I can go ahead, or whether I should perform another function to purge the policy instances on the Log tab.
Do you have a parameter-driven SQL script to purge the tables above?
Thanks very much, Diana
Excellent notes Anders, you just saved my day! Cheers