I've been trying to exclude some VMs from discovery using an override on the ExcludeVMsByVMName parameter of the Veeam Stage 2 - Virtual Machine discovery rule.
My override value contains multiple arguments with wildcards, like this:
ING*,Q*,PRE*,INT*,INP*
I then triggered a full topology rebuild (at least three of them, actually, over the course of the past two weeks), but some of these VMs (INT* and PRE*) are still discovered (visible in the All VMs view).
I have not found any event in the Collector logs that I could relate to this behavior.
OK thanks - did you also Enforce the override, just to be sure it is applied everywhere? Maybe there are other, more specific (group- or instance-level) overrides that are blocking it.
If it's not enforced, try setting the Enforce option, wait until the change has rolled out to all Collectors, and then update the topology again.
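If it helps, here is a rough PowerShell sketch for listing every override on that discovery, so you can spot a more specific one. Treat it as a sketch only - the display-name pattern is a guess, so adjust it to your MP version:

Import-Module OperationsManager
# Find the Stage 2 VM discovery by display name (pattern is an assumption)
$disc = Get-SCOMDiscovery -DisplayName '*Stage 2*Virtual Machine*'
# List all overrides targeting it, with their value, Enforced flag and context class
Get-SCOMOverride -Discovery $disc | Select-Object Name, Parameter, Value, Enforced, Context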
If it was already enforced, then we'd need to analyse further - which hosts are the still-discovered VMs running on? And then dive into the logs of the Collector(s) monitoring those hosts.
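A quick way to check the hosting path of those VMs from the SCOM shell - note this matches any instance by display name, so it may pick up non-VM objects too; it's only a rough sketch:

Import-Module OperationsManager
# Path shows the hosting chain (host / container) for each matching instance
Get-SCOMClassInstance -DisplayName 'INT*','PRE*' | Select-Object DisplayName, Path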
If we need log analysis it would be best to raise a Veeam support case.
Your question about "which hosts are the VMs on" gave me a hint though: most (if not all) of the VMs that remain discovered are on hosts that do not have a corresponding VM Discovery Container instance, which explains why the discovery can't run for them.
But the Veeam Stage 1 - Virtual Machine Containers discovery doesn't have any override set, so I'm stuck again: I can't figure out why these containers are not discovered... back to raising a support case!
OK, I think I got it: the root cause is that the cluster and hosts that contained these VMs don't exist anymore, so the discovery chain can't run for these VM instances.
I guess I'll have to find the right discoveries and use the almighty Remove-SCOMDisabledClassInstance to clean everything up.
Thinking out loud again, but I don't believe that will be as easy as I thought... since the VM Discovery Container instances for these VMs don't exist anymore, I can't disable the VM discovery for those objects.
And I can't target my override at the upper-level object (the Collector service), since that would undiscover all the VMs running on hosts monitored by that Collector.
Maybe I could move all monitoring to another Collector, apply the override, run Remove-SCOMDisabledClassInstance, and then load-balance the Collectors again.
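For what it's worth, that override-plus-cleanup step might look roughly like this in the SCOM shell. The display names are guesses, the override MP must be one of your unsealed MPs, and this assumes monitoring has already been moved off the Collector - so a sketch, not a recipe:

Import-Module OperationsManager
$disc = Get-SCOMDiscovery -DisplayName 'Veeam Stage 2 - Virtual Machine*'   # name pattern is an assumption
$ovMp = Get-SCOMManagementPack -DisplayName 'Veeam Overrides'               # hypothetical unsealed override MP
# Disable the discovery (for its whole target class) with an enforced override...
Disable-SCOMDiscovery -Discovery $disc -ManagementPack $ovMp -Enforce
# ...then, once the override has propagated, purge the now-undiscoverable instances
Remove-SCOMDisabledClassInstance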
Hi Cyaz,
Right - that was my next question! Do the hosts for these VMs still exist...
If the hosts for these VMs were removed from vCenter (without first removing them from Veeam by unchecking them in our web UI) - then it could cause this problem.
In principle, a rebuild of topology should fix everything - but there can be discovery timing issues, and sometimes orphaned items can remain.
As you've said, since the VM Container objects don't exist any more, the VM discovery has no target - so it will not run and will not 'un-discover' those VMs...
One option could be to create a 'fake' host in your Collector topology file - a host whose name matches the old host that was removed. That would force Stage 1 discovery to recreate that 'host', and then Stage 2 would remove the VMs for that 'host'.
It's not a very elegant solution! But it should work.
The file is OMTopology.xml and by default it is in C:\Program Files\Veeam\Veeam Virtualization Extensions for System Center\Collector\Log
You would have to manually add a topologyNode section for your 'fake' host.
To do this, I suggest just copying an existing host node from the file (a host from the same vCenter and datacenter), and changing the hostname to match the host for orphaned VMs.
The host node section will look like this -
<topologyNode id='VMHOST:HOSTNAME'
              type='HostSystem'
              parent='VMFOLDER:vcenter-name:group-id'
              name='HOSTNAME'>
  <property>......lots of properties, the contents don't matter</property>
</topologyNode>
You only need to change HOSTNAME in your pasted node; the other properties don't matter for this case.
Don't forget to include the closing </topologyNode> at the end.
Then quickly make this file read-only, or the Collector will overwrite it with correct topology information!
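For example, from PowerShell (same default path as above):

# Set the read-only flag so the Collector can't overwrite the edited file
Set-ItemProperty -Path 'C:\Program Files\Veeam\Veeam Virtualization Extensions for System Center\Collector\Log\OMTopology.xml' -Name IsReadOnly -Value $true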
Now that the file contains the fake host, you can run the Rebuild Topology command again.
Stage 1 Discovery will add your fake host to SCOM - including a matching VM Container.
Stage 2 Discovery will then target the VM Container - and there are no VMs for this host in OMTopology, so the existing VMs should be un-discovered.
When the VMs are removed from SCOM, remove the read-only flag from OMTopology.xml.
Then restart the Veeam Collector service - this will recreate the topology file and also rebuild the topology.
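Again from PowerShell, something like this - note the service display-name pattern is a guess, so check the exact name with Get-Service on your Collector machine first:

# Clear the read-only flag, then restart the Collector service
Set-ItemProperty -Path 'C:\Program Files\Veeam\Veeam Virtualization Extensions for System Center\Collector\Log\OMTopology.xml' -Name IsReadOnly -Value $false
Get-Service -DisplayName '*Veeam*Collector*' | Restart-Service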
Then after Stage 1 discovery runs, your fake host will be un-discovered from SCOM.
Again, it's not elegant to 'hack' topology in this way - but it does work within the normal scope of SCOM discovery mechanisms, and you don't have to run Remove-SCOMDisabledClassInstance (that command can take hours to run).
However if you have any doubts or concerns about the process, please contact our support!