Host-based backup of VMware vSphere VMs.
dvdn61
Influencer
Posts: 11
Liked: 2 times
Joined: May 01, 2019 2:10 pm
Full Name: Duco van der Nol

Scale out repository problems

Post by dvdn61 »

We recently obtained a Dell PowerVault, on which we created two ReFS volumes of 128 TB each (the maximum volume size).
We decided to add these volumes to a new scale-out repo for easier management and to cater for future growth.
We use per-VM backups.

We added our old storage to the scale-out repo and placed those repos in seal mode to start migrating to the new volumes. We did this in batches to make the transition as smooth as possible.
In the beginning, we saw that jobs were distributed evenly across the volumes. We projected both volumes to fill up to about 70-75%, leaving enough space for at least a year's growth. Part of our data consists of dump volumes (VMDKs) from Oracle databases, which is mostly unique data requiring a lot of space. What happened is that one volume filled up very quickly, while the other was less than 50% full.
But now, with one disk full, all new restore points are being created on the other disk, filling it up very quickly as well. What worsens the problem is that a full backup needs to be created when changing over to another volume, eating up space even faster. We maintain 2 weeks of retention with weekly synthetic fulls, so it will take 3 weeks before the expired data can be deleted from the full disk.
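To put rough numbers on that 3-week figure, here is a small sketch of the retention math (a simplification assuming daily increments, not our exact schedule):

```python
# Rough sketch of the retention timing (daily increments assumed).
# A full backup and its increments can only be deleted once every
# restore point in that chain has aged past retention.
RETENTION_DAYS = 14        # 2 weeks of retention
DAYS_BETWEEN_FULLS = 7     # weekly synthetic fulls

# The old chain's last increment is written up to 6 days after its full;
# the whole chain expires once that last increment is RETENTION_DAYS old.
days_until_deletable = (DAYS_BETWEEN_FULLS - 1) + RETENTION_DAYS
print(f"old chain deletable after ~{days_until_deletable} days")  # ~20 days, i.e. ~3 weeks
```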
In the meantime, all our free space is being consumed at a fast pace because VMs are being relocated to the other volume. Have other people experienced this behavior? We are seriously reconsidering scale-out at the moment, as this is an unworkable situation. Any suggestions?
Mildur
Product Manager
Posts: 9716
Liked: 2565 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Scale out repository problems

Post by Mildur »

Hi Duco

If previous backup chains are on a single extent and you have enabled the "Data Locality" placement policy, all new backups will be written to that same extent. This is expected.
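To illustrate the idea, here is a simplified model of extent selection (NOT our actual selection algorithm; the extent and VM names are made up):

```python
# Simplified model of SOBR extent selection (not Veeam's actual algorithm;
# extent and VM names are made up for illustration).
def pick_extent(vm, extents, chains, policy="DataLocality"):
    """extents: {name: free_bytes}; chains: {vm: extent holding its chain}."""
    preferred = chains.get(vm)
    if policy == "DataLocality" and preferred and extents[preferred] > 0:
        return preferred                        # keep the chain together
    # "Performance" policy, no existing chain, or preferred extent full:
    # fall back to the extent with the most free space
    return max(extents, key=extents.get)

extents = {"ReFS-1": 5 * 2**40, "ReFS-2": 90 * 2**40}   # free space in bytes
chains = {"oracle-dump-01": "ReFS-1"}
print(pick_extent("oracle-dump-01", extents, chains))    # ReFS-1, even though fuller
```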

Did you migrate the data with V11 or V12? V11 would have moved the files without our FastClone technology.
You could use the new rebalance feature in V12. The rebalance option distributes the backup files across all extents in a FastClone-aware manner. If you previously migrated the data with V11, there is a chance to reclaim more storage by rebalancing.
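Conceptually, the difference between a plain move and a FastClone-aware move looks like this (a simplified block-level sketch; the block counts are arbitrary):

```python
# Sketch: why a plain (V11) move inflates storage while a FastClone-aware
# move does not (simplified block-level model; counts are arbitrary).
# With ReFS block cloning, several full backup files can reference the
# same physical blocks; a plain file move rematerializes every block.
SHARED_BLOCKS = 900    # blocks the weekly fulls have in common
UNIQUE_BLOCKS = 100    # blocks unique to each full
FULLS = 3

fastclone_aware = SHARED_BLOCKS + UNIQUE_BLOCKS * FULLS    # clones preserved
plain_move      = (SHARED_BLOCKS + UNIQUE_BLOCKS) * FULLS  # clones lost

print(f"FastClone-aware move: {fastclone_aware} blocks on disk")  # 1200
print(f"plain (V11) move:     {plain_move} blocks on disk")       # 3000
```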

Best,
Fabian
Product Management Analyst @ Veeam Software
dvdn61
Influencer
Posts: 11
Liked: 2 times
Joined: May 01, 2019 2:10 pm
Full Name: Duco van der Nol

Re: Scale out repository problems

Post by dvdn61 »

Hi Fabian,

Thanks for your answer. We are currently using V11, and we have enabled "Data Locality", since the documentation states that this is the preferred setting for ReFS. Even with "Performance" I am not sure we would have avoided the current problems: as soon as one volume fills up, all its full backups have to be created on another volume, so data would still be migrated, resulting in lots of migrations to the remaining volume and filling it up quickly.
Mildur
Product Manager
Posts: 9716
Liked: 2565 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: Scale out repository problems

Post by Mildur »

Hello Duco
And we have enabled "Data Locality", since the documentation states that this is the preferred setting for ReFS
Yes. All backup chains for a single VM must be on the same extent; otherwise, you won't be able to leverage FastClone.

Are you using per-machine backup chains? I'm pretty sure you can solve your storage issue with rebalancing in V12. VMs will be redistributed across both extents with FastClone support. Right now you have 3 full backup files per VM, which require 3x the full VM size on your storage.
Once a VM's backup chain has been moved to another extent in a FastClone-aware way, all 3 full backup files together will take only approximately 1-1.5x that space.
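As a back-of-the-envelope example (the VM size and weekly change rate are assumptions for illustration):

```python
# Back-of-the-envelope: 3 synthetic fulls with and without FastClone.
# VM size and weekly change rate are assumptions for illustration.
vm_size_tb = 1.0
weekly_change_rate = 0.20
fulls = 3

without_fastclone = fulls * vm_size_tb   # every full holds real data
with_fastclone = vm_size_tb + (fulls - 1) * weekly_change_rate * vm_size_tb

print(f"without FastClone: {without_fastclone:.1f} TB")  # 3.0 TB
print(f"with FastClone:    {with_fastclone:.1f} TB")     # 1.4 TB, i.e. ~1-1.5x
```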

Best,
Fabian
Product Management Analyst @ Veeam Software