In this example – we will take the most BASIC PowerShell script possible, and show how we can use it to collect Performance data in a SCOM rule.
We will use Silect’s free tool – MPAuthor.
First – the script:
I will simply count the number of items in the C:\Windows\Temp directory:
$TempCount = (Get-ChildItem C:\Windows\Temp).Count
$TempCount
Here is the typical script output:
Now – the idea here is – what is the minimal amount of work required to get this into SCOM?
We need to add a few items to the script.
First, we need to load the MOMScript API for PowerShell:
$api = New-Object -comObject "MOM.ScriptAPI"
Next – since this is a Performance rule – we need a way to output the data in a format that SCOM understands. These are called PropertyBags, and here is how to create one for this use case and load it into a variable:
$bag = $api.CreatePropertyBag()
Next – we need to add the data we collected into the bag, as a name/value pair separated by a comma:
$bag.AddValue("FilesInTemp",$TempCount)
I just gave my perf counter a made-up name, “FilesInTemp”, and $TempCount represents the numerical value from my previous simple script.
Lastly – we output the PropertyBag:
$bag
Now let's see the script:
#Load the MOMScript API and the PropertyBag provider
$api = New-Object -comObject "MOM.ScriptAPI"
$bag = $api.CreatePropertyBag()

#Main script
$TempCount = (Get-ChildItem C:\Windows\Temp).Count

#Add the data into the PropertyBag
$bag.AddValue("FilesInTemp",$TempCount)

#Output the PropertyBag data for SCOM consumption:
$bag
These are the minimal changes to a PowerShell script to use it in a SCOM rule. I want to add one more section – which will create event log entries for troubleshooting purposes:
$api.LogScriptEvent("CountFilesInTemp.ps1",3280,0,"Count Files in Temp Directory Script is starting")
and
$api.LogScriptEvent("CountFilesInTemp.ps1",3281,0,"Count Files in Temp Directory Script is complete. Number of files is $TempCount")
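For reference, the LogScriptEvent arguments are the script name (which shows up in the event description), an event ID of your choosing, a severity, and the message text. To my recollection, severity 0 is informational, 1 is error, and 2 is warning. A purely hypothetical example (not part of this script) of logging an error-severity event would be:

#Hypothetical example: log an error-severity event if the directory cannot be read
if (-not (Test-Path "C:\Windows\Temp"))
{
  $api.LogScriptEvent("CountFilesInTemp.ps1",3282,1,"C:\Windows\Temp was not found - script cannot continue")
}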
Now let's see the final script:
#Load the MOMScript API and the PropertyBag provider
$api = New-Object -comObject "MOM.ScriptAPI"
$bag = $api.CreatePropertyBag()

#Log an event that our script is starting
$api.LogScriptEvent("CountFilesInTemp.ps1",3280,0,"Count Files in Temp Directory Script is starting")

#Main script
$TempCount = (Get-ChildItem C:\Windows\Temp).Count

#Add the data into the PropertyBag
$bag.AddValue("FilesInTemp",$TempCount)

#Log an event that our script is complete
$api.LogScriptEvent("CountFilesInTemp.ps1",3281,0,"Count Files in Temp Directory Script is complete. Number of files is $TempCount")

#Output the PropertyBag data for SCOM consumption:
$bag
We are ready to bring this into MPAuthor.
Load MPAuthor, and create a new empty MP.
New rule:
Give the script a name and paste the script body in:
On the Performance Mapper screen, we can put in whatever we want here – we can use discovered properties or just plain text. Here is my example:
I made up static text for Object, Counter, and Instance that I want for my perf data in SCOM. The “Value” I used is “FilesInTemp”, which comes from whatever text you put in the PropertyBag (highlighted):
$bag.AddValue("FilesInTemp",$TempCount)
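Behind the scenes, that value should end up being referenced in the generated rule XML using the standard property bag syntax, something like $Data/Property[@Name='FilesInTemp']$, so whatever name you pass to AddValue is the name you map on this screen.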
Click Next, and choose a Target class for the rule to run on. I always choose “Windows Server Operating System” as a generic target.
Then provide a good ID and Display Name:
Don’t create a view at this time. However, we DO want to save data to the Data Warehouse for reporting:
This is a script – so we need a schedule. For testing – I will set this to run every 60 seconds. In production, we would NEVER want a PowerShell script to run on all systems that often.
Looks good:
Let's save the MP and review the XML. The primary MP entities created were:
- Datasource Module
- Probe Action Module
- Rule
The probe action module uses Microsoft.Windows.PowerShellPropertyBagTriggerOnlyProbe, and it is basically just the script itself.
The DataSource module is a composite datasource that combines the System.Scheduler simple recurring scheduler (the interval value is passed to it by the Rule) with the probe action module, Example.PowerShell.Script.CountFilesInTemp.Rule.ProbeActionModuleType.
The Rule is super simple. It contains the values for Timeout and Interval, calls the DataSource, and adds a Condition Detection which maps the data output by the script to Performance data using System.Performance.DataGenericMapper. Then it has two write actions, which simply collect the performance data to the OpsDB and the DW:
<WriteAction ID="Microsoft.SystemCenter.CollectPerformanceData" TypeID="SC!Microsoft.SystemCenter.CollectPerformanceData" />
<WriteAction ID="Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
At this point – let's import the MP into our lab SCOM environment for review.
On an agent – you can review the SCOM event log to see the progress:
Log Name: Operations Manager
Event ID: 1200
Description: New Management Pack(s) requested. Management group "OMMG1", configuration id:"".

Log Name: Operations Manager
Event ID: 1201
Description: New Management Pack with id:"Example.PowerShell.Script", version:"1.0.0.1" received.

Log Name: Operations Manager
Event ID: 1210
Description: New configuration became active. Management group "OMMG1", configuration id:"b845c23195a8ce07743ceae1aee1f2d9,fc14a16523ee42f1b45b0ca60c727751:0017BC89".

Log Name: Operations Manager
Event ID: 3280
Description: CountFilesInTemp.ps1 : Count Files in Temp Directory Script is starting

Log Name: Operations Manager
Event ID: 3281
Description: CountFilesInTemp.ps1 : Count Files in Temp Directory Script is complete. Number of files is 807
Based on those last two events it looks good! Let's create a performance view and see what's coming in:
Success!
I’ll attach my Example MP below.
Hello,
Is there a way to get more than one performance counter in one script?
e.g. Disksize, UsedDiskGB, …
This would be a bad idea, because of how SCOM manages performance data: it organizes it in the databases by rule. Technically you could create a rule that collects perf data from a script, outputs multiple property bags, and passes each to the collection write action. However, this would make a mess in the database, because we aggregate data on a per-rule basis, not per counter, so all of this data would get munged together. By design, each object/counter/instance collected should have its own rule, regardless of the data source.
Thanks
I notice you said each object/counter/instance needs its own rule.
Does this mean that a single script rule can only return a single property bag, so can only collect data for a single instance?
The reason I ask is I’m trying to surface perf data (multiple counters) from an external storage array that can only give it to me by polling regularly and returning it in one go for all counters on all disks and volumes. I want to split this up “per disk/volume” as the instance and “per counter”.
I read somewhere that if I wanted to collect multiple counters from a single script I needed to create a management pack with a script data source that fetched all this data and turned it into a set of data items, each a property bag. The rules would then be one per counter using this data source, and SCOM would "cookdown" so the script didn't get run per counter.
I suspect there is more to it, however!
Using a single datasource and then using cookdown is the way to go for that: one single script execution, with lots of performance collection rules using the same data output, each just filtering with a condition detection. Pay attention not to break cookdown by passing any unique data to the datasource.
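As a rough sketch of that pattern (the disk names, counter names, and values below are made up purely for illustration), the shared script datasource would emit one property bag per instance and per counter, and each collection rule would then filter on its own counter name with a condition detection:

#Hypothetical sketch: one property bag per disk, per counter
$api = New-Object -comObject "MOM.ScriptAPI"

#Assume this data came back from polling the storage array in a single call (values made up)
$disks = @(
  @{ Name = "Disk01"; DiskSizeGB = 500; UsedDiskGB = 320 },
  @{ Name = "Disk02"; DiskSizeGB = 250; UsedDiskGB = 75 }
)

foreach ($disk in $disks)
{
  foreach ($counter in @("DiskSizeGB","UsedDiskGB"))
  {
    $bag = $api.CreatePropertyBag()
    $bag.AddValue("Instance",$disk.Name)    #mapped to Instance by the performance mapper
    $bag.AddValue("CounterName",$counter)   #each rule's condition detection filters on this
    $bag.AddValue("Value",$disk[$counter])  #the numeric value to collect
    $bag
  }
}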
Hi,
I tried following this and ended up with a working rule, but it seems to add a new instance every time it collects rather than adding the next value to the same instance.
I am targeting one server, and after 20 minutes I have 20 rows in my perf graph, each with an individual value.
What have I got the wrong way round?
You are likely targeting a multiple instance object class. What are you targeting?
It is a memory value obtained from a web API coming from an Elasticsearch device cluster.
Here's my script:
#Load the MOMScript API and the PropertyBag provider
$api = New-Object -comObject "MOM.ScriptAPI"
$bag = $api.CreatePropertyBag()
$api.LogScriptEvent("ElasticclusterJVMHeapUsedBytes.ps1",3280,0,"Script is starting")
#Main script
$JVMHeapSize = (Invoke-RestMethod -Uri "http://servername:9200/_cluster/stats").nodes.jvm.mem.heap_used_in_bytes
#Add the data into the PropertyBag
$bag.AddValue("JVMHeapSize",$JVMHeapSize)
$api.LogScriptEvent("ElasticclusterJVMHeapUsedBytes.ps1",3281,0,"Script is complete. Mem Size is $JVMHeapSize")
#Output the PropertyBag data for SCOM consumption:
$bag
But what CLASS are you targeting this rule to?
Many thanks. It was targeted to Windows Computer; I have adjusted this to Windows Server Operating System and all is well. Many thanks for your help.
*NEVER* target any workflows to Windows Computer. That’s just a best practice. 🙂
If I also want to monitor with alerts, should I then make a new script or can I use the same one?
It is best to use the same one – but in a dedicated datasource module. Then the rule and the monitor can share the same script datasource, but only execute it once. This gets a bit more complicated, in order to make cookdown work.
Examples:
https://social.technet.microsoft.com/wiki/contents/articles/15218.operations-manager-management-pack-authoring-cookdown.aspx
https://channel9.msdn.com/Series/System-Center-2012-R2-Operations-Manager-Management-Packs/Mod23
Any guides on doing this in VSAE please? I can’t expand the MP XML in this article to see all the configuration / datasource info.
Why would I see this error when creating a similar script? “Module was unable to convert parameter to a double value Original parameter”
I'd have to see your XML, but the most common cause is that you are evaluating a string value with an expression that uses greater than/less than. You need to ensure what you are collecting is defined as an Integer or Double value and passed in the correct format.
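One thing you can also do in the script itself, assuming the value might come back from your data source as a string, is cast it to a numeric type before adding it to the property bag. A small hypothetical example:

#Hypothetical example: make sure the value passed to the bag is numeric, not a string
$api = New-Object -comObject "MOM.ScriptAPI"
$bag = $api.CreatePropertyBag()
$Value = "42"                              #pretend this came back from an API as a string
$bag.AddValue("MyCounter",[double]$Value)  #cast before adding, so the mapper gets a number
$bag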
If you just run the basic script directly on a Windows SCOM agent to generate a property bag output, is anything reaching SCOM at this point? I'm really interested in what is happening in the script when we make a new ComObject and pass parameters to it from our script. At this point, are we leveraging the SCOM agent in any way? Or are the $API and $BAG variables only used when this script is integrated into a SCOM Performance Collection Rule?
Nothing happens when you just run the script. The bags are dumped to the command line output. If the script is not called as part of the monitoringhost.exe process, nothing is transferred to the queues, nor processed by the agent healthservice to the parent healthservice.
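If you do want to see what the agent would be handed when testing by hand, one trick (assuming you switch it back before using the script in a rule) is to temporarily replace the final $bag line with a call that writes the property bag XML to the console:

#For manual testing only: print the property bag XML instead of just outputting the object
$api.Return($bag)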
Thanks Kevin!
Hi!
Kevin, is it possible to summarize all these counters if I want the total number of files on a chart?