
Advanced MP Authoring – MPU Nov 2019

You can watch my session and all the sessions presented at this link:

https://www.silect.com/mp-university-nov2019-replay/

Previous sessions available at:

https://kevinholman.com/2019/07/15/advanced-mp-authoring-mpu-may-2019/

11 Comments

  1. Jukka-Pekka Grohn

Thanks for the nice fragment, but… I need to import new root and CA certificates to managed servers. I found that I couldn't add certificates to the .mpb file with mpbutil.exe. I could only add the script file that was configured in the fragment. Is there a way to expand your fragment so that it can also deliver the needed "support files" in the management pack?

    • Kevin Holman

You should be able to distribute any file, such as a root CA, using the fragment. Just import the fragment multiple times, once for each file, or modify the fragment as you need. However, you should not distribute individual machine certs using an MP, as this would distribute the same certs to all agents.
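The "import the fragment multiple times, once for each file" workflow above is essentially token replacement in an XML template. A minimal sketch in Python, assuming a hypothetical fragment that uses ##Token##-style placeholders (the actual element and token names in any given fragment library may differ):

```python
# Hypothetical sketch: stamp out one copy of an MP fragment per file to
# distribute, replacing placeholder tokens. The XML element and token
# names here are illustrative, not from any specific fragment library.
FRAGMENT_TEMPLATE = '<DeployableResource ID="##UniqueID##" FileName="##FileName##" />'

def render_fragment(template: str, tokens: dict) -> str:
    """Replace each ##Token## placeholder with its value."""
    out = template
    for name, value in tokens.items():
        out = out.replace(f"##{name}##", value)
    return out

files = ["RootCA.cer", "IssuingCA.cer"]
fragments = [
    render_fragment(FRAGMENT_TEMPLATE, {"UniqueID": f"File{i}", "FileName": f})
    for i, f in enumerate(files)
]
# fragments[0] -> '<DeployableResource ID="File0" FileName="RootCA.cer" />'
```

Each rendered copy then gets pasted into the unsealed MP, one per file to deliver, which is exactly the repeat-the-fragment approach described above.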

  2. Orit

    Thank you for the great demo.
    It would be helpful if we could make the file distribution conditional on some parameters and push only the relevant files to each agent (for example, based on group membership, server FQDN, or NetBIOS name + domain name).
    For example, if we deployed certificates named Cert_serverFQDN.pfx, each agent would download only its own certificate and not every server's certificate, which would reduce the impact on the network.
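The per-agent selection described above could be as simple as having each agent compute its own expected file name and fetch only that one. A hedged sketch, with the Cert_<serverFQDN>.pfx naming convention taken from the comment (the helper names are hypothetical):

```python
def cert_file_for(fqdn: str) -> str:
    """Name of the one certificate file this agent should fetch,
    following the Cert_<serverFQDN>.pfx convention from the comment."""
    return f"Cert_{fqdn}.pfx"

def select_own_cert(available: list[str], fqdn: str) -> list[str]:
    """Filter a shared file list down to this agent's own certificate."""
    wanted = cert_file_for(fqdn)
    return [f for f in available if f == wanted]

# An agent on web01.contoso.com would pull just its own file:
files = ["Cert_web01.contoso.com.pfx", "Cert_sql01.contoso.com.pfx"]
# select_own_cert(files, "web01.contoso.com") -> ["Cert_web01.contoso.com.pfx"]
```

In a real workflow the script running on the agent would use the local machine's FQDN, so each agent downloads only its matching file instead of the whole set.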

  3. Omer Roth

    Thank you Kevin for another great MPU!
    I have a few questions regarding data sources:
    1. Since data sources are basically built on top of each other until you reach assembly references, would a long chain of data source dependencies impact runtime?
    If we look at each data source as a function written in code, you could clearly say that calling a function that calls another is (to some extent) a waste of time and resources.

    2. Would you prefer using built-in data sources and probes over customized scripts?
    Let's say you can use both options, but for some reason using the built-in ones complicates the writing a bit, while you could simply use a scheduler with a PowerShell probe and do everything in the script.
    Is there any advantage to using built-in data sources over just using scripts every time?

    3. As I’m writing to you my favorite site for examples and documentation is down (systemcenter.wiki, peace be upon it), is there any other place you’d recommend aspiring authors to go to for that kind of documentation? It’s a bit tedious opening every sealed MP in order to look for examples 🙂

    Thanks in advance!

    • Kevin Holman

      1. It is so minuscule it becomes irrelevant, from a monitoring perspective. All the relevant related modules are loaded into memory anyway. I do find it to often be way over-complicated, but efficiency is almost a zero concern. The actual module's efficiency (especially a script) is far more of a concern than the number of data sources that reference and pass data to other data sources.

      2. I always prefer a "native module" to a script. While we have made scripts incredibly efficient, they will never scale as highly as a native module. It becomes a silly argument at some point, if your script runs in less than a tenth of a second and consumes almost zero CPU or memory… but native modules are almost always more efficient. That said, if I find a native module to be poorly documented, or difficult to use, I have no problem using a script. I give more focus to workflows that need to run VERY frequently, such as every 60 seconds or less. If it runs every 5 minutes or more, I really don't care.

      3. I don't know – that was the real repository. I personally keep an unsealed copy of all MPs that I work with, to find examples… it's pretty easy to keep up with.
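The function-call analogy from question 1 can be made concrete: chaining data sources is like composing functions, and the call overhead per link is negligible next to the work done inside the innermost module. A toy Python sketch (not SCOM code, just the analogy):

```python
import time

def scheduler(downstream):
    """Outermost 'data source': just invokes the next module."""
    return downstream()

def mapper(downstream):
    """Middle 'condition detection': trivially transforms the output."""
    return {"mapped": downstream()}

def probe():
    """Innermost module doing the real work (stand-in for a script).
    This is where nearly all of the runtime actually goes."""
    time.sleep(0.01)  # simulate real work
    return "sample"

# A three-deep chain, analogous to composing DS -> CD -> probe:
result = scheduler(lambda: mapper(probe))
# result -> {"mapped": "sample"}
```

The two extra function calls cost nanoseconds; the simulated 10 ms inside `probe` dominates completely, which mirrors the point above that the module's own efficiency (especially a script) matters far more than the depth of the composition chain.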

  4. Benjamin M

    Would you consider doing an authoring tutorial on how to create a simple custom monitoring template? Not using the existing templates (e.g. Web Application Availability Monitoring), but creating our own for those small monitors that need to be slightly customized dozens of times over, where overrides are cumbersome or not sufficient.

    • Kevin Holman

      I'll be honest, that's beyond my capability. Templates are MUCH more complex than MPs, and most of the templates we have authored internally have borrowed heavily from existing template code. Since this is not an area most customers need, I haven't ever developed that skill set, and Microsoft doesn't directly support template authoring… even though there is no reason you can't do it, and many 3rd-party MPs do.

      • Benjamin M

        Thanks for the reply! It's an understandable position. I'm in the process of migrating our systems off of Orion onto SCOM, and Orion has a DNS User Experience monitor template that is in heavy use here but for which SCOM has no equivalent replacement. I can create an equivalent monitor all day long that duplicates it (and even better, doesn't need to run on a DNS server itself), and even create a reusable fragment, but it would be a lot easier to make a template my users could just run a wizard on than to explain how to replace info in an XML fragment (I know find and replace is simple, but most people's brains are going to vapor-lock when they see the XML).

        • Kevin Holman

          Right on – makes perfect sense. That's a great scenario for a template. Most customers don't like to delegate Authoring rights to end users, because they always mess things up, save them in the wrong place, etc. So I see more scenarios like using the registry to discover watcher nodes and the end targets to monitor. Then you can have a single MP, and customers just change the registry on their machines to control what gets watched and which targets get monitored. Or customers will use a CSV solution, where the customer just edits a centralized CSV to add/remove monitoring and the MP discovers from that CSV. Lastly, the most advanced customers I work with have internal developers create a web-based front end to accept and validate input from end users, which is then used to modify/create MPs and import them.
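The CSV-driven approach mentioned above can be illustrated with a short sketch: a centralized CSV lists the targets, and the discovery workflow simply parses it into monitoring targets. The column names here are hypothetical, just to show the shape of the idea:

```python
import csv
import io

# Hypothetical centralized CSV that an admin edits to add/remove
# monitoring; column names are illustrative.
CSV_TEXT = """Target,Component,Enabled
web01.contoso.com,IIS,true
sql01.contoso.com,SQL,false
"""

def discover_targets(csv_text: str) -> list[dict]:
    """Parse the CSV and return only the rows enabled for monitoring,
    mimicking a CSV-based discovery data source."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["Enabled"].lower() == "true"]

# discover_targets(CSV_TEXT)
# -> [{'Target': 'web01.contoso.com', 'Component': 'IIS', 'Enabled': 'true'}]
```

In the real pattern the discovery script would read the file from a share on a schedule, so adding a row to the CSV is all an end user ever touches – no Authoring rights, no XML.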
