However, I noticed today that a few of the jobs had well over 100 restore points! Looking into it, I found that all of the daily incrementals were being kept on disk for months. The common theme among these jobs appears to be that they all contained at least one VM that had been decommissioned and removed from the primary source job that feeds the backup copy. The VMs were removed right around the time the incrementals started being kept indefinitely.
So, via the Veeam console, I disabled the backup copy job, deleted the remaining backup copies of those old VMs, and re-enabled the job. It now appears to be working through the backlog, creating a restore point for each day and cleaning up the old restore points it shouldn't have retained (the number of restore points is decreasing as it goes).

It looks like manually deleting the old backup copies will fix the issue, but can anyone explain why this happened, or, most importantly:
What are the best practice steps for removing an old VM from a primary backup job and associated backup copy jobs?
It would be nice to keep the old copies around for a while in case we ever need to go back to them, but I also don't want all the other VMs in the job to start "over-retaining". Thanks!