- Novice
- Posts: 8
- Liked: 2 times
- Joined: Sep 06, 2018 4:15 am
- Full Name: Steve Harris
VEEAM performance backup/restore of TB sized VHDX files
Hi All,
My organization is planning to migrate to Veeam VBR 12.2 in the near future.
Currently we have 10 x intra-state HPE ProLiant DL380 Gen11 servers with the Hyper-V role installed, 10 TB per site, 100 TB total data.
Each site runs Arcserve with a standalone LTO-8 tape drive.
Each HPE ProLiant server hosts a DC/File/Print Server VM whose E:\ data drive is typically 10 TB (Microsoft Office files, PDFs, video, etc.).
The E:\ drive is a fixed-size virtual hard disk (15 TB VHDX).
QUESTION:
From a Veeam perspective, is it advisable to keep our VHDX files as small as possible and expand them as required?
Apparently Veeam exhibits slow backups and restores with virtual disks over 10 GB.
If anyone can offer field experience/expertise in this area, it would be appreciated!
With Veeam, we plan to use a GFS hybrid disk/tape solution, e.g.:
Intra-state (local) 40 TB Veeam repositories:
- Initial VM seed: active full backup to VBK on the local repo, copied to USB disk, then transported to and imported at the 400 TB central repo
- Monday, Tuesday, Wednesday, Thursday incremental backups to VIB, then replicated to the 400 TB central site repo
- VIBs rolled into Friday synthetic fulls (52-week retention)
400 TB central repo:
- VIBs rolled into Friday synthetic fulls (52-week retention)
- End-of-month 100 TB backup
- All Friday synthetic fulls, then an active full backup to an HPE MSL3040 with 4 x LTO-9 drives
Repeat this cycle.
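As a sanity check on the cycle above, here is a back-of-envelope capacity sketch. This is only an illustration: the compression ratio, daily change rate, and number of fulls retained locally are assumed figures, not Veeam measurements.

```python
# Rough local-repo footprint for the GFS plan above.
# All ratios below are assumptions for illustration, not Veeam output.

SOURCE_TB = 10.0     # per-site source data (from the plan)
COMPRESSION = 0.5    # assumed 2:1 dedupe/compression ratio
DAILY_CHANGE = 0.05  # assumed 5% daily change rate
FULLS_KEPT = 2       # assumed synthetic fulls retained on the 40 TB local repo

full_tb = SOURCE_TB * COMPRESSION                 # size of one synthetic full
vib_tb = SOURCE_TB * DAILY_CHANGE * COMPRESSION   # size of one daily VIB
footprint_tb = FULLS_KEPT * full_tb + 4 * vib_tb  # Mon-Thu VIBs plus fulls

print(f"Synthetic full:  ~{full_tb:.1f} TB")
print(f"Daily VIB:       ~{vib_tb:.2f} TB")
print(f"Local footprint: ~{footprint_tb:.1f} TB of 40 TB")
```

Plugging in real change rates observed from the existing Arcserve jobs would make this much more useful before committing to the 40 TB sizing.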
Thanks!
- Veeam Software
- Posts: 2721
- Liked: 628 times
- Joined: Jun 28, 2016 12:12 pm
Re: VEEAM performance backup/restore of TB sized VHDX files
Hi Steve,
There are no Veeam-specific performance limitations tied to disk size. I'm not quite sure where you read there was a limit at 10 GiB (maybe it was 100?), but there is no expected performance drop-off due to the size of the VHDX.
Processing rates for backup and restore depend largely on the backup infrastructure resources (Hyper-V hosts, repository, network), and I've worked with many clients heavily invested in Hyper-V whose disks were significantly larger than 10 TiB and still saw very good backup speeds.
So, based on what I'm reading, there's no need to re-arrange your production VMs; I don't anticipate any issues from your description alone.
Just to confirm, your plan here is:
1. Primary backup to a local repository
2. Create a Backup Copy Seed and copy the seed to your central site.
3. Create a new Backup Copy Job and map the Backup Copy job to the seeded backup from step 2
4. Create a Backup to Tape job to tape-out the backups
All looks pretty normal if that's the case.
For your tape devices though: do you have multiple sites each with its own tape drive, or are the tape drives all in a central location? (Basically, where will the tape-out occur: at each local site or at a central site?) The reason I ask is that if the drives are geographically distributed (e.g., each local site has its own drive and local backups), you will need to ensure each tape job only includes data local to that site, else you may get cross-WAN traffic. So if you have three sites A, B, and C, each with its own tape device, you would want three tape jobs, each with only the jobs from the same site added as a source.
David Domask | Product Management: Principal Analyst
- Novice
- Posts: 8
- Liked: 2 times
- Joined: Sep 06, 2018 4:15 am
- Full Name: Steve Harris
Re: VEEAM performance backup/restore of TB sized VHDX files
Hi David,
Thanks for the reply and clarifications. I didn't think there would be an issue with large VHDX files.
RE: For your tape devices though, you have multiple sites each with its own tape drive, or the tape drives are all in a central location?
We have multiple sites, each with its own tape drive.
As we onboard each site, we will phase out and uninstall the Arcserve/standalone tape drive solution.
Thanks for checking the plan.
Deployment-wise, we will target one of the 10 sites for the POC, e.g.:
- 1 x additional HPE ProLiant DL380 Gen11 for the local repo (43 TB disk, RAID 6, Linux)
- The Veeam data mover installed on the Hyper-V host running the DC/File/Print VM, backing up to the local repo
- Central site repo: 1 x HPE Alletra with 400 TB and 10 GbE, plus 1 x HPE MSL3040 with 4 x LTO-9 drives
WAN is 50-00 Mbit/s intra-state.
The goal is to minimise WAN traffic, so only VIBs traverse the WAN; all synthetic full backups are built on the local/central site repos.
Veeam will have the final word on best practices, etc.
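Since only VIBs traverse the WAN, a quick transfer-window check is worth doing. The link speed and nightly VIB size below are assumptions (the line-speed figure quoted above is ambiguous, so 50 Mbit/s is taken as the pessimistic case):

```python
# Back-of-envelope nightly replication window per site.
# LINK_MBPS and VIB_GB are assumed values, not measurements.

LINK_MBPS = 50  # assumed intra-state WAN speed (Mbit/s)
VIB_GB = 250.0  # assumed compressed nightly incremental per site

seconds = (VIB_GB * 8 * 1000) / LINK_MBPS  # GB -> Mbit, then divide by rate
hours = seconds / 3600
print(f"~{hours:.1f} h to replicate a {VIB_GB:.0f} GB VIB at {LINK_MBPS} Mbit/s")
```

At 50 Mbit/s that works out to roughly 11 hours per 250 GB, so the per-site incremental size needs checking against the real line speed before assuming the nightly window is sufficient.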
Steve
- Veeam Software
- Posts: 2721
- Liked: 628 times
- Joined: Jun 28, 2016 12:12 pm
Re: VEEAM performance backup/restore of TB sized VHDX files
Happy to advise, Steve, and thank you for the clarifications. So the tape-out will occur at the central location, using the Backup Copies that send backups from the remote sites to the central site. The central location will have Veeam Backup & Replication installed, and you will simply add each remote site one by one and manage it all from the central location. Am I correct?
If so, sounds pretty normal and reasonable, and no concerns.
David Domask | Product Management: Principal Analyst
- Novice
- Posts: 8
- Liked: 2 times
- Joined: Sep 06, 2018 4:15 am
- Full Name: Steve Harris
Re: VEEAM performance backup/restore of TB sized VHDX files
That is correct, David: tape-out at the central site, and the remote local repos use forward incremental backups, which are sent from each site to the central repo Monday to Thursday.
The central location will have a VM dedicated to VBR 12.2 and another VM dedicated to Veeam ONE. We will add each remote site one by one, all managed via the Veeam console in the VBR VM.