
The 31552 event, or “why is my data warehouse server consuming so much CPU?”

A very common customer scenario – all of a sudden you start getting these 31552 events on the RMS, every 10 minutes.  This drives a monitor state and generates an alert when the monitor goes red.


 

However – in my experience this alert usually gets “missed” among all the other alerts that OpsMgr raises throughout the day.  Eventually, customers notice that the state of the RMS is critical, that their availability reports take forever or start timing out, or that CPU on the data warehouse server is pegged or very high.  It may be several days before they are even aware of the condition.

 



 

The 31552 event looks similar to the one below:

Date and Time: 8/26/2010 11:10:10 AM 
Log Name: Operations Manager 
Source: Health Service Modules 
Event Number: 31552 
Level: 1 
Logging Computer: OMRMS.opsmgr.net 
User: N/A 
Description: 
Failed to store data in the Data Warehouse. Exception ‘SqlException’: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. One or more workflows were affected by this. Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance Instance name: State data set Instance ID: {50F43FBB-3F59-10DA-AD1F-77E61C831E36} Management group: PROD1
 

The alert is:

Data Warehouse object health state data dedicated maintenance process failed to perform maintenance operation

Data Warehouse object health state data dedicated maintenance process failed to perform maintenance operation. Failed to store data in the Data Warehouse. 
Exception ‘SqlException’: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

One or more workflows were affected by this.

Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
Instance name: State data set 
Instance ID: {50F43FBB-3F59-10DA-AD1F-77E61C831E36} 
Management group: PROD1

 

 

Now – there can be MANY causes of getting this 31552 event and monitor state.  There is NO SINGLE diagnosis or solution.  Generally – we recommend you call into MS support when impacted by this so your specific issue can be evaluated.

 

The most common issues causing these 31552 events are:

  • A sudden flood (or excessive sustained amounts) of data to the warehouse that is causing aggregations to fail moving forward.
  • The Exchange 2010 MP is imported into an environment with lots of statechanges happening.
  • Excessively large ManagedEntityProperty tables causing maintenance to fail because it cannot be parsed quickly enough in the time allotted.
  • Too many tables joined in a view or query (>256 tables) when using SQL 2005 as the DB Engine
  • Poor SQL performance issues (typically disk I/O latency)
  • When using SQL Standard edition, you might see these randomly at night during maintenance, because online index operations are not supported in SQL Standard edition.
  • Messed up SQL permissions
  • Too much data in the warehouse staging tables which was not processed due to an issue and is now too much to be processed at one time.
  • Random 31552’s caused by DBA maintenance, backup operations, etc.

If you think you are impacted with this, and have an excessively large ManagedEntityProperty table – the best bet is to open a support case.  This requires careful diagnosis and involves manually deleting data from the database which is only supported when directed by a Microsoft Support Professional.

 

The “too many tables” issue is EASY to diagnose – because the text of the 31552 event will state exactly that.  It is easily fixed by reducing data warehouse retention for the affected dataset type.
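
Before reducing retention, it helps to see what the current settings are.  Below is a read-only sketch using the same StandardDataset and StandardDatasetAggregation tables referenced by the queries later in this article; retention itself is normally changed with the dwdatarp.exe tool rather than by editing the tables directly:

USE [OperationsManagerDW]
--View current retention (in days) for each dataset and aggregation type
--AggregationTypeId: 0 = Raw, 20 = Hourly, 30 = Daily
SELECT sds.SchemaName,
   sda.AggregationTypeId,
   sda.MaxDataAgeDays
FROM StandardDataset AS sds WITH (NOLOCK)
JOIN StandardDatasetAggregation AS sda WITH (NOLOCK) ON sda.DatasetId = sds.DatasetId
ORDER BY sds.SchemaName, sda.AggregationTypeId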

 

 

Now – the MOST common scenario I seem to run into actually just happened to me in my lab environment, which prompted this article.  I see this happen in customer environments all too often.

I had a monitor which was based on Windows events.  There was a “bad” event and a “good” event.  However – something broke in the application and caused BOTH events to be written to the application log multiple times a second.  We could argue this is a bad monitor, or a defective logging module for the application…. but regardless, the condition is that a monitor of ANY type starts flapping, changing from good to bad to good WAY too many times.

What resulted – was 21,000 state changes for my monitor, within a 15 MINUTE period.


 

At the same time, all the aggregate rollup and dependency monitors were also having to process these statechanges…. which are also recorded as statechange events in the database.  So you can see – a SINGLE bad monitor can wreak havoc on the entire system… affecting many more monitors in the health state rollup.

 

While the Operations Database handles these inserts quite well, the Data Warehouse does not.  Each statechange event is written to both databases.  The standard dataset maintenance job is kicked off every 60 seconds on the warehouse.  It is called by a rule (Standard Data Warehouse Data Set maintenance rule) which targets the “Standard Data Set” class and executes a specialized write action to start maintenance on the warehouse.

What is failing here – is that the maintenance operation (which also handles the hourly and daily dataset aggregations used by reports) is failing to complete in the default time allotted.  Essentially – there are SO many statechanges in a given hour that the maintenance operation cannot complete; it times out and rolls back the transaction.  This becomes a never-ending loop, which is why it never seems to “catch up”… a single large transaction that cannot complete blocks the work from ever being committed to the database.  Under normal circumstances, 10 minutes is plenty of time to complete these aggregations, but under a flood condition there are too many statechanges to calculate the time in state for each monitor and instance before the timeout.

So – the solution here is fairly simple:

  • First – solve the initial problem that caused the flood.  Ensure you don’t have too many statechanges constantly coming in that are contributing to this.  I discuss how to detect this condition and rectify it HERE, and a sample query to spot the noisiest monitors is shown just after this list.
  • Second – we need to disable the standard built-in maintenance that is failing, and run it manually, so it can complete successfully.
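
To get a quick idea of which monitors are generating the flood, a query along the lines of the one below (run against the OperationsManager database, not the warehouse) will show the unit monitors with the most statechanges.  This is only a sketch – it assumes the standard StateChangeEvent, State, and MonitorView objects in the operational database, and you can add a date filter on sce.TimeGenerated to scope it to a time window:

--Sketch: top noisy unit monitors by statechange count (OperationsManager database)
SELECT TOP 20
   m.DisplayName AS MonitorDisplayName,
   m.Name AS MonitorIdName,
   COUNT(sce.StateId) AS NumStateChanges
FROM StateChangeEvent sce WITH (NOLOCK)
JOIN State s WITH (NOLOCK) ON sce.StateId = s.StateId
JOIN MonitorView m WITH (NOLOCK) ON s.MonitorId = m.Id
WHERE m.IsUnitMonitor = 1
GROUP BY m.DisplayName, m.Name
ORDER BY NumStateChanges DESC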

For the second step above – here is the process:

1.  Using the instance name section in the 31552 event, find the dataset that is causing the timeout (see the “Instance name” line in the event below)

Date and Time: 8/26/2010 11:10:10 AM 
Log Name: Operations Manager 
Source: Health Service Modules 
Event Number: 31552 
Level: 1 
Logging Computer: OMRMS.opsmgr.net 
User: N/A 
Description: 
Failed to store data in the Data Warehouse. Exception ‘SqlException’: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. One or more workflows were affected by this.

Workflow name: Microsoft.SystemCenter.DataWarehouse.StandardDataSetMaintenance 
Instance name: State data set 
Instance ID: {50F43FBB-3F59-10DA-AD1F-77E61C831E36} 
Management group: PROD1
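
If you are not sure which SchemaName maps to the instance name shown in the event (for example, “State data set” maps to the ‘State’ schema), a quick read-only look at the StandardDataset table will list the possible values.  This is just a sketch against the same table used by the maintenance query in step 5:

USE [OperationsManagerDW]
--List the standard datasets; the SchemaName value ('State', 'Perf', 'Event', etc.)
--is what gets plugged into the maintenance query in step 5
SELECT DatasetId, SchemaName
FROM StandardDataset WITH (NOLOCK)
ORDER BY SchemaName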

2.  Create an override to disable the maintenance procedure for this data set:

  • In the OpsMgr console go to Authoring-> Rules-> Change Scope to “Standard Data Set”
  • Right click the rule “Standard Data Warehouse Data Set maintenance rule” > Overrides > Override the rule > For a specific object of class: Standard Data Set
  • Select the data set that you found from the event in step 1.
  • Check the box next to Enabled and change the override value to “False”, and then apply the changes.
  • This will disable dataset maintenance from running automatically for the given dataset type.

3.  Restart the “System Center Management” service on the RMS.  This is done to kill any maintenance already running, and ensure the override is applied immediately.

4.  Wait 10 minutes and then connect to the SQL server that hosts the OperationsManagerDW database and open SQL Management Studio.

5. Run the query below, replacing 'State' with the SchemaName of the dataset you identified in step 1.

**Note: This query could take several hours to complete.  This depends on how much data has been flooded to the warehouse, and how far behind it is in processing.  Do not stop the query prior to completion.

USE [OperationsManagerDW]
DECLARE @DataSet uniqueidentifier
SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'State')
EXEC StandardDatasetMaintenance @DataSet

 

 

6. Once the query finishes, delete the override configured in step 2.

7. Monitor the event log for any further timeout events.

**Note:  One of the biggest challenges is often getting maintenance to stop running on its own, so that you can run it manually.  This matters because one of the first things StandardDatasetMaintenance does is check whether it is already running.  If it is, the manual run will simply start and complete in about a second without doing any work.  If you are having trouble getting it to run manually, make sure you have created the override and waited at least 10 minutes for the previous job to time out.  Some customers elect to restart the healthservice on all management servers to kill the previous job, or go as far as restarting the SQL service.  Another tactic is to disable maintenance for ALL objects of class “Dataset”, then wait 10 minutes.

 

In my case – my maintenance task ran for 25 minutes then completed.  In most customer environments – this can take several hours to complete, depending on how powerful their SQL servers are and how big the backlog is.  If the maintenance task returns immediately and does not appear to run, ensure your override is set correctly, and try again after 10 minutes.  Maintenance will not run if the warehouse thinks it is already running.

***Note:  Now – this seemed to clear up my issue, as immediately the 31552’s were gone.  However – at 2am, they came back, every 10 minutes again and my warehouse CPU was spiked again.  My assumption here – is that it got through the hourly aggregations flood, and now it was trying to get through the daily aggregations work and had the same issue.  So – when I discovered this was sick again – I used the same procedure above, and this time the job took the same 25 minutes.  I have seen this same behavior with a customer, where it took several days to “plow through” the flood of data to finally get to a state where the maintenance would always complete in the 10 minute time period.

 

Cory Delamarter has written a good query to figure out how far behind you are.  I will include it below:

--BEGIN QUERY
USE OperationsManagerDW;
WITH AggregationInfo AS (
   SELECT AggregationType = CASE
         WHEN AggregationTypeId = 0 THEN 'Raw'
         WHEN AggregationTypeId = 20 THEN 'Hourly'
         WHEN AggregationTypeId = 30 THEN 'Daily'
         ELSE NULL
      END
      ,AggregationTypeId
      ,MIN(AggregationDateTime) as 'TimeUTC_NextToAggregate'
      ,COUNT(AggregationDateTime) as 'Count_OutstandingAggregations'
      ,DatasetId
   FROM StandardDatasetAggregationHistory
   WHERE LastAggregationDurationSeconds IS NULL
   GROUP BY DatasetId, AggregationTypeId
)
SELECT SDS.SchemaName
   ,AI.AggregationType
   ,AI.TimeUTC_NextToAggregate
   ,Count_OutstandingAggregations
   ,SDA.MaxDataAgeDays
   ,SDA.LastGroomingDateTime
   ,SDS.DebugLevel
   ,AI.DataSetId
FROM StandardDataSet AS SDS WITH(NOLOCK)
JOIN AggregationInfo AS AI WITH(NOLOCK) ON SDS.DatasetId = AI.DatasetId
JOIN dbo.StandardDatasetAggregation AS SDA WITH(NOLOCK) ON SDA.DatasetId = SDS.DatasetId AND SDA.AggregationTypeID = AI.AggregationTypeID
ORDER BY SchemaName DESC
--END QUERY

 

This will output how many aggregations you are behind on, and of which type.  Very handy to help calculate how long it will take to get this cleaned up!
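For example (just to illustrate the math), if the query shows 120 outstanding hourly aggregations for the State dataset and each manual maintenance run takes around 25 minutes like mine did, you are looking at roughly 50 hours of catch-up time.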

Additionally – Cory has written a loop command to keep running maintenance.  This is important as each run of maintenance will only process a single hourly or daily aggregation.  Once you have figured out how long each run takes, you can get an idea of how long it will take to catch up – and kick this looping script off:

--BEGIN QUERY
USE OperationsManagerDW
DECLARE @DataSetName varchar(50)
-- Set this string to the "SchemaName" value that is behind
SET @DataSetName = 'State'
WHILE ( (SELECT COUNT(AggregationDateTime)
         FROM StandardDatasetAggregationHistory AS ah WITH(NOLOCK)
         INNER JOIN StandardDataSet AS ds WITH(NOLOCK) ON ah.DatasetId = ds.DatasetId
         WHERE ds.SchemaName = @DataSetName
         AND LastAggregationDurationSeconds IS NULL) > 1 )
BEGIN
   BEGIN TRANSACTION;
   USE [OperationsManagerDW]
   DECLARE @DataSet uniqueidentifier
   SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = @DataSetName)
   EXEC StandardDatasetMaintenance @DataSet
   COMMIT TRANSACTION;
END
--END QUERY

Another step in the process is to increase the time allowed for aggregations to complete.  If you had a terrible state storm, it might take HOURS for your aggregations to process.  However, if you are simply reaching a level where your state changes are too heavy, or your SQL I/O is not fast enough, you might consider increasing the timeout for aggregations to run.

The dataset maintenance runs every 60 seconds.  In the State Dataset, we delay starting maintenance for 5 minutes, and then we have a 5 minute default timeout.  This is why you might see a 31552 event every 10 minutes like clockwork.

What we can do is create a new registry key and value on EVERY management server, with a new timeout value.

Open the registry and locate HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0.  Create a new key under “3.0” named “Data Warehouse”.  Then create a new DWORD value named “Command Timeout Seconds” with a value of 1800 (seconds).  This takes the default 5 minute timeout to 30 minutes.  See also: Recommended registry tweaks for SCOM 2016 and 2019 management servers – Kevin Holman’s Blog

 

If you cannot complete your aggregations any faster than 30 minutes under normal conditions, something is very wrong and you need to resolve the root cause.

 

This is a good, simple process to try to resolve the issue yourself, without having to log a call with Microsoft first.  There is no risk in attempting this process yourself – to see if it can resolve your issue.

If you are still seeing timeout events, there are other issues involved.  I’d recommend opening a call with Microsoft at that point.

Again – this is just ONE TYPE of (very common) 31552 issue.  There are many others, and careful diagnosis is needed.  Never assume someone else’s fix will resolve your specific problem, and NEVER edit an OpsMgr database directly unless under the direct support of a Microsoft support engineer.

 

 

(***Special thanks to Chris Wallen, a Sr. Support Escalation Engineer in Microsoft Support for assisting with the data for this article)

12 Comments

  1. Charlez

    Hi Kevin,

    Does this apply to newer versions also (later then SCOM 2012)? Especially the query to start the dataset maintenance?

    Br

  2. Senad Sadikovic

    Hi, does this issue apply to this event aswell? I’m seeing these quite often on just one of the two management servers 31552 Health Service Modules

    Failed to store data in the Data Warehouse.
    Exception ‘SqlException’: Procedure or function ManagementPackInstall has too many arguments specified.

    One or more workflows were affected by this.

    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.Configuration
    Instance name: Data Warehouse Synchronization Service
    Instance ID: {A1139DD0-DEB8-6B52-FD3B-B4F158817B78}
    Management group: [MGMT Group Name]

    • Kevin Holman

      “ManagementPackInstall has too many arguments specified” is an error you will see when an update rollup was not applied correctly or failed. This means the UR level of SCOM does not match the database update level. What version and UR level is SCOM? What is the reported Database version in the Administration pane?

      • Senad Sadikovic

        Hi Kevin, thanks for the reply and sorry for my late reply 😀

        Yes i have checked and its the following:

        SCOM Version: 10.19.10050.0
        Database: 10.19.103110.0

        I was already planning on upgrading them to the UR3 so hopefully this will resolve the problem. But i guess the SQL has to be on the same level aswell?

        Thank you 🙂

        • Kevin Holman

          The problem is your OpsDB is patched with UR2, however your DW is not. Your DW is at UR1 level. This is unsupported.

          Simply log into a management server, and find your \Program Files\Microsoft System Center\Operations Manager\Server\SQL Script for Update Rollups\UR_Datawarehouse.sql file. Run this file manually against the DW database. Then restart the healthservice on your management server throwing that error and see if this resolves it.

          • Senad Sadikovic

            Thanks! Didn’t find the time to check it out today but i’ll try it asap and report my result here.

          • Senad Sadikovic

            So i executed the code inside of Program Files\Microsoft System Center\Operations Manager\Server\SQL Script for Update Rollups\UR_Datawarehouse.sql …against the DW database and the alert resolved itself quickly!

            Worked as a charm, thanks again Kevin! Much appreciated!

  3. Michael

    Hi Kevin,

    Im having a similar problem in our 1807 environment where one MS is getting constant 2115 events, as well as 31551 events.

    Failed to store data in the Data Warehouse. The operation will be retried.
    Exception ‘SqlTimeoutException’: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
    Timed out stored procedure: MaintenanceModeChange

    One or more workflows were affected by this.

    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.MaintenanceMode
    Instance name: Data Warehouse Synchronization Service
    Instance ID: {36931632-51C3-621B-2070-4D385B64DF19}
    Management group: #######

    It seems to always be the MM workflow that is timing out, however intermittently there are also some 31554 events indicating that it worked. Can I follow a similar workaround by disabling this via a rule & then running manually?

    regards
    Michael

  4. Kamal Sharma

    Hi Kevin/ Team

    when i am running SQL query I got below error on SCOM 2012 R2 DBW database .How to correct this ?

    Msg 102, Level 15, State 1, Line 3
    Incorrect syntax near ‘‘’.

    USE [OperationsManagerDW]
    DECLARE @DataSet uniqueidentifier
    SET @DataSet = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = ‘Event‘)
    EXEC StandardDatasetMaintenance @DataSet

  5. Srinivas

    Hi Kevin/ Team,

    Running with SCOM 2019 UR6, event id 31552, can you suggest me on this.

    Failed to store data in the Data Warehouse.
    Exception ‘SqlException’: Sql execution failed. Error 547, Level 16, State 0, Procedure ManagementPackGroom, Line 416, Message: The DELETE statement conflicted with the REFERENCE constraint “FK_RelationshipTypeManagementPackVersion_ManagedEntityType_TargetType”. The conflict occurred in database “OperationsManagerDW”, table “dbo.RelationshipTypeManagementPackVersion”, column ‘TargetManagedEntityTypeRowId’.

    One or more workflows were affected by this.

    Workflow name: Microsoft.SystemCenter.DataWarehouse.Synchronization.Configuration
    Instance name: Data Warehouse Synchronization Service
