
You probably have a ton of old event data in your Data Warehouse


 


 

Prior to SCOM 2012 R2 UR7, we had an issue where we did not groom old data out of the Event Parameter and Event Rule tables in the DW.  This shows up as those tables growing quite large, especially the event parameter tables, and the old, orphaned data is never groomed out.
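To see whether these tables are actually large in your environment, a quick size check can be run against the OperationsManagerDW database.  This is just a sketch; the `Event%` name filter is an assumption based on the standard Event dataset table naming (each table carries a GUID suffix):

```sql
-- Sketch: list Event dataset tables in the DW by row count and reserved space.
-- Run in the OperationsManagerDW database. The LIKE filter is an assumption
-- based on standard Event dataset naming (EventParameter_<guid>, EventRule_<guid>).
SELECT OBJECT_SCHEMA_NAME(ps.object_id) + '.' + OBJECT_NAME(ps.object_id) AS TableName
      ,SUM(ps.row_count) AS [RowCount]
      ,SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats ps
WHERE OBJECT_NAME(ps.object_id) LIKE 'Event%'
  AND ps.index_id IN (0, 1)   -- heap or clustered index only, so rows are not double counted
GROUP BY ps.object_id
ORDER BY ReservedMB DESC
```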

It isn’t a big deal, but if you’d like to free up some space in your Data Warehouse database – read on.

 

I’ll just go out and say that ANYONE who ever ran a SCOM management group prior to SCOM 2012 R2 UR7 is affected.  How much just depends on how many events you were collecting and shoving into your DW.

Once you apply UR7 or later, this issue stops, and the normal grooming will groom out the data as events get groomed.  HOWEVER – we will never go back and clean out the old, already orphaned event parameters and event rules.

 

Nicole was the first person I saw write about this issue:

https://blogs.msdn.microsoft.com/nicole_welch/2016/01/07/scom-2012-large-event-parameter-tables/

 

Essentially, to know if you are affected, there are some SQL statements you can run, but I wrote my own.  These take a long time to run, but they give you an idea of how many events are in scope to be groomed.

 

SELECT COUNT(*)
FROM Event.vEventParameter ep
WHERE ep.EventOriginId NOT IN (SELECT DISTINCT EventOriginId FROM Event.vEvent)

SELECT COUNT(*)
FROM Event.vEventRule er
WHERE er.EventOriginId NOT IN (SELECT DISTINCT EventOriginId FROM Event.vEvent)

 

 

Nicole has a stored procedure listed on her site.  You can run that to create the stored proc, then call it with a “max rows to groom” parameter.  It works well and I recommend it.

 

Alternatively, you can just run this as a straight SQL query, which I will post below.

I hard coded @MaxRowsToGroom to 1,000,000 rows.  I found this runs pretty quickly and doesn’t use a lot of transaction log space.  You can adjust this depending on how much cleanup you need to do if you prefer the query approach, or just use the stored proc and the loop command in the blog post linked above.
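If you want to groom everything in one sitting without blowing up the transaction log, the same DELETE TOP pattern can be wrapped in a loop that repeats until nothing is left to delete.  A minimal single-table sketch; the table name here is hypothetical, since your actual tables carry a GUID suffix (Event.EventParameter_&lt;guid&gt;):

```sql
-- Sketch: batched grooming loop for ONE table, repeating until no orphans remain.
-- The table name below is a placeholder; substitute your real
-- Event.EventParameter_<guid> table name.
DECLARE @BatchSize int = 100000
       ,@Rows int = 1

WHILE @Rows > 0
BEGIN
    DELETE TOP (@BatchSize) ep
    FROM Event.EventParameter_00000000000000000000000000000000 ep  -- hypothetical name
    WHERE ep.EventOriginId NOT IN (SELECT EventOriginId FROM Event.vEvent)

    SET @Rows = @@ROWCOUNT  -- loop exits once a pass deletes nothing
END
```

Smaller batches keep each implicit transaction short, so log growth and blocking stay manageable.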

 

UPDATE 5/26:  I changed the script to allow for customers who have multiple Event Parameter tables and were running into the error:

Msg 512, Level 16, State 1, Line 7

Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

 

I don’t have a DB with multiple Event Param tables, but I simply dumped the TableGuids to a temp table and loop over each row in the temp table.  This should work.

 

ALWAYS TAKE A BACKUP OF YOUR DATABASE FIRST!

 

DECLARE @MaxRowsToGroom int
       ,@RowsDeleted int
SET NOCOUNT ON;
SET @MaxRowsToGroom = 1000000

DECLARE @RuleTableName sysname
       ,@ParamTableName sysname
       ,@DatasetId uniqueidentifier = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'Event')
       ,@TableGuid uniqueidentifier
       ,@Statement nvarchar(max)
       ,@SchemaName sysname = 'Event'

-- Build a temp table with one row per Event dataset table set (TableGuid)
IF OBJECT_ID('tempdb..#Tables') IS NOT NULL
    DROP TABLE #Tables

SELECT RowNum = ROW_NUMBER() OVER (ORDER BY TableGuid)
      ,TableGuid
INTO #Tables
FROM StandardDatasetTableMap
WHERE DatasetId = @DatasetId

DECLARE @MaxRownum int
SET @MaxRownum = (SELECT MAX(RowNum) FROM #Tables)
DECLARE @Iter int
SET @Iter = (SELECT MIN(RowNum) FROM #Tables)

-- Loop over each table set, grooming orphans from its Rule and Parameter tables
WHILE @Iter <= @MaxRownum
BEGIN
    SET @TableGuid = (SELECT TableGuid FROM #Tables WHERE RowNum = @Iter)

    BEGIN TRAN

    -- Delete orphaned rows from this table set's EventRule table
    SELECT TOP 1 @RuleTableName = BaseTableName + '_' + REPLACE(CAST(@TableGuid AS varchar(50)), '-', '')
    FROM StandardDatasetAggregationStorage
    WHERE (DatasetId = @DatasetId)
      AND (AggregationTypeId = 0)
      AND (DependentTableInd = 1)
      AND (TableTag = 'Rule')

    SET @Statement = 'DELETE TOP (' + CAST(@MaxRowsToGroom AS varchar(15)) + ')'
        + ' FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@RuleTableName)
        + ' WHERE (EventOriginId NOT IN (SELECT EventOriginId FROM Event.vEvent)) '
    EXECUTE (@Statement)

    -- Delete orphaned rows from this table set's EventParameter table
    SELECT TOP 1 @ParamTableName = BaseTableName + '_' + REPLACE(CAST(@TableGuid AS varchar(50)), '-', '')
    FROM StandardDatasetAggregationStorage
    WHERE (DatasetId = @DatasetId)
      AND (AggregationTypeId = 0)
      AND (DependentTableInd = 1)
      AND (TableTag = 'Parameter')

    SET @Statement = 'DELETE TOP (' + CAST(@MaxRowsToGroom AS varchar(15)) + ')'
        + ' FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ParamTableName)
        + ' WHERE (EventOriginId NOT IN (SELECT EventOriginId FROM Event.vEvent)) '
    EXECUTE (@Statement)
    SET @RowsDeleted = @@ROWCOUNT  -- reports the parameter-table delete count for this pass

    COMMIT

    SET @Iter = @Iter + 1
    SELECT @RowsDeleted AS RowsDeleted
END

IF OBJECT_ID('tempdb..#Tables') IS NOT NULL
    DROP TABLE #Tables

 

 

I do recommend you clean this up.  The data doesn’t hurt anything sitting there, other than potentially making event-based reports run slower, but the big impact to me is dealing with such a large DW: backups, restores, and the cost of ownership of a database that big, for little reason.

Make sure you update statistics when you are done, if not also a full DB reindex.   To update statistics, run:    exec sp_updatestats
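After deleting this many rows, statistics and indexes on the groomed tables will be stale and fragmented.  A sketch of the post-cleanup maintenance; the rebuild statement uses a hypothetical table name, so repeat it for each Event.EventParameter_&lt;guid&gt; and Event.EventRule_&lt;guid&gt; table you actually groomed:

```sql
-- Refresh out-of-date statistics across the DW after the mass delete
EXEC sp_updatestats

-- Optionally rebuild indexes on the groomed tables.
-- The table name below is a placeholder; run once per groomed table,
-- ideally during a maintenance window since rebuilds are resource intensive.
ALTER INDEX ALL ON Event.EventParameter_00000000000000000000000000000000 REBUILD
```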

 

 

Here is an example of my before and after:

Before:

[image: event table sizes before grooming]

After:

[image: event table sizes after grooming]

 

Trimmed from 3.3 GB to 117 MB!   And if this were a large production environment, it could be a substantial amount of data.

 

 

And remember, most collected events are worthless to begin with.  As a tuning exercise, I recommend disabling MOST of the out-of-the-box event collections, and also reducing your event retention in the DW:

https://kevinholman.com/2009/11/25/tuning-tip-turning-off-some-over-collection-of-events/

https://kevinholman.com/2010/01/05/understanding-and-modifying-data-warehouse-retention-and-grooming/

 
