Discussions related to using object storage as a backup target.
backupquestions
Expert
Posts: 186
Liked: 21 times
Joined: Mar 13, 2019 2:30 pm
Full Name: Alabaster McJenkins

How to best use existing space?

Post by backupquestions »

I have a physical server with internal disks plus an external direct-attached SAS tray with even more disks. In Windows these are formatted as two separate drive letters, ReFS with a 64K block size. So far I have made one repo on each drive letter, using per-VM backup files.
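(For anyone double-checking their own repos: fsutil reports the ReFS cluster size; the drive letter below is just an example, and it should show 65536 bytes per cluster for a 64K format.)

Code: Select all

fsutil fsinfo refsinfo D: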

Well, then I made a SOBR and used one of these two repos as its only extent, and I added S3 object storage as the capacity tier. My initial plan was to put just a few large VM backup copy jobs on this drive with GFS enabled, so that old GFS points would be moved out to S3.
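For reference, this is roughly what the current setup looks like scripted with the Veeam PowerShell cmdlets. All names are made up and exact parameters can vary between B&R versions, so treat it as a sketch rather than my literal config:

Code: Select all

# Assumes "Repo-SAS" (the extent) and an object storage repository
# "S3-Capacity" have already been added to the backup server.
$extent = Get-VBRBackupRepository -Name "Repo-SAS"
$sobr = Add-VBRScaleOutBackupRepository -Name "SOBR-01" -PolicyType DataLocality -Extent $extent -UsePerVMBackupFiles

# Attach the S3 repository as the capacity tier so sealed GFS chains can move out.
$s3 = Get-VBRObjectStorageRepository -Name "S3-Capacity"
Add-VBRCapacityExtent -Repository $sobr -Extent $s3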

Now I'm thinking that with new VMs being added to the company, data storage is going to start climbing again, and I'm going to end up needing more space.

So I thought maybe I'd add the other of the two repos into the SOBR and let things age out from there too; I take a weekly synthetic full on those jobs, so that should seal the chains for them. Below are the potential issues I may run into if I do this. What do you think?

1. The large VMs are already in a regular backup job on the first repo, and in the backup copy job targeting the other repo (the current extent). So if I add the first repo to the SOBR as well, it would now be offloading those very same large VMs a second time to S3: there would be two full VBKs out there for those VMs, one based on the GFS points from the backup copy job and one based on the full from the regular backup job. Or is the offload smart enough to look at data across all local extents/jobs and avoid sending more than one full for the same VM? I know it does this kind of block reuse per job or per VM, but this case spans two different jobs, so it's maybe more elaborate.

2. All of our VMs' short-term chain data is copied separately to a VCC provider via a backup copy job, and we have high change rates on some large VMs. If we offload those particular VMs too, I fear their data will be transmitted to both the VCC cloud and the S3 cloud each day, and even the changed blocks alone are too much for our bandwidth, even at a good 1 Gb. If that turns out to be a problem, I thought maybe I'd exclude those VMs from going out to S3, but the offload is a global process for the entire SOBR, so I can't do that...
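In case it's useful context, here's the kind of PowerShell sketch I'd run to see which jobs target which repo before committing. Names are examples, and GetTargetRepository() is the method I believe exposes a job's target, though details may differ per version:

Code: Select all

# List each backup / backup copy job and the repository it writes to,
# to see which chains would start offloading once the second repo joins the SOBR.
Get-VBRJob | Where-Object { $_.JobType -in "Backup", "BackupSync" } | ForEach-Object {
    [pscustomobject]@{
        Job  = $_.Name
        Type = $_.JobType   # BackupSync = backup copy job
        Repo = $_.GetTargetRepository().Name
    }
} | Format-Table -AutoSize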
HannesK
Product Manager
Posts: 14319
Liked: 2890 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: How to best use existing space?

Post by HannesK »

Hello,
1. "it would now be offloading those very same large VMs a second time to S3" - correct.

2. Correct, you would need an extra repository without a capacity tier for that.
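If it helps, a rough sketch of such a standalone repository in PowerShell (server name and folder are placeholders, and parameters can differ between versions). Anything written there stays local because the repository is not a SOBR extent:

Code: Select all

# Hypothetical standalone repository, deliberately kept out of the SOBR,
# so backup copy jobs pointed at it are never offloaded to the capacity tier.
$server = Get-VBRServer -Name "backupsrv01"
Add-VBRBackupRepository -Name "Repo-NoOffload" -Server $server -Folder "E:\NoOffload" -Type WinLocal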

Best regards,
Hannes
