- Novice
- Posts: 4
- Liked: never
- Joined: Sep 16, 2021 5:04 pm
Backup Repository Transfer Optimization
Hello
Ultimately, I am looking to optimize our offsite backups to AWS S3. Currently, a backup job writes the VM data to a Data Domain, and a tape job then copies it from the DD to our AWS Storage Gateway. We see ~40 MB/s throughput for these offsite tape jobs (bottleneck: source). This has been fine, but we are acquiring a gigabit uplink, which we intend to leverage to reduce our multi-day upload times.
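For context, the rough math behind "multi-day" (the 10 TB dataset size below is purely illustrative, not our actual figure):

```python
# Back-of-envelope transfer times: current ~40 MB/s tape job vs. the
# ~125 MB/s theoretical ceiling of a 1 Gbps uplink.
DATASET_TB = 10                       # hypothetical dataset size
dataset_mb = DATASET_TB * 1_000_000   # decimal TB -> MB

for label, rate in [("~40 MB/s (today)", 40), ("125 MB/s (1 Gbps cap)", 125)]:
    hours = dataset_mb / rate / 3600
    print(f"{label}: {hours:.1f} h (~{hours / 24:.1f} days)")
```

At those rates a 10 TB set takes roughly 2.9 days today versus under a day at line rate, which is why we care about the 125 MB/s cap.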
Seeing no way around the repository-followed-by-gateway process, we're focusing on creating a backup repo that can at least saturate that 125 MB/s cap. Looking through these forums, it appears that our Data Domain does not perform the way we need. To that end, we've repurposed a physical server with four 2 TB HDDs in RAID5. Using Iometer, we can sustain 125 MB/s writes (at 32K blocks) on the server. However, a test Veeam backup job holds at only 50-70 MB/s processing rate, with the bottleneck reported as the server acting as the repository.
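In case it helps anyone reproduce the disk check without Iometer, here is a crude Python stand-in for the sequential 32K write test we ran (single-threaded and fsync'd only at the end, so treat its number as a rough floor rather than a match for Iometer's; the path is a placeholder):

```python
import os, time

# Crude sequential-write check: 1 GiB in 32 KiB blocks, fsync'd so the
# OS page cache doesn't inflate the result. PATH is a placeholder --
# point it at the repository volume before running.
PATH = r"E:\repo\writetest.bin"
BLOCK = 32 * 1024
COUNT = 32 * 1024          # 32768 blocks x 32 KiB = 1 GiB
buf = os.urandom(BLOCK)

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.perf_counter() - start
os.remove(PATH)

print(f"{COUNT * BLOCK / elapsed / 1e6:.0f} MB/s sequential write")
```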
In short, is there a recommended way to handle this? Or is this approach entirely backwards?
More information:
All the physical ports in this scenario are 1 Gbps. The external physical server has two NICs. We have three ESXi hosts; Veeam runs on a Windows VM on one of them. The VM has two vNICs, each mapped to a different host NIC. In short, I don't see any actual network bottlenecks: no dropped packets or saturated links to be found.
We have tried using an SMB share, expecting to use the Multichannel feature between the VM and the physical server storing the data -- the balancing worked but topped out at around 350 Mbps aggregate (roughly 44 MB/s). Reviewing these forums, we also tried setting up a Veeam repository server rather than an SMB share on the physical device. This resulted in comparable throughput but no NIC balancing.
If nothing comes up here, we're looking at rebuilding the array as RAID10 for the increased write speed. Once again, if there's a better approach to utilizing Veeam for this task, please do share and link documentation.
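For what it's worth, the textbook write-penalty math driving the RAID10 idea (standard penalty factors; the ~150 IOPS per spindle is an assumption for a 7.2K HDD, not a measurement of our drives):

```python
# Effective random-write IOPS for a 4-disk array under the usual
# RAID write penalties (RAID5 = 4 back-end I/Os per write, RAID10 = 2).
SPINDLE_IOPS = 150    # assumed figure for a 7.2K HDD
DISKS = 4

for level, penalty in [("RAID5", 4), ("RAID10", 2)]:
    print(f"{level}: ~{DISKS * SPINDLE_IOPS / penalty:.0f} random-write IOPS")
```

Large sequential writes can dodge the RAID5 penalty via full-stripe writes, which may be why Iometer looked fine while the real backup job did not.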
Thanks
- Product Manager
- Posts: 14836
- Liked: 3082 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Backup Repository Transfer Optimization
Hello,
and welcome to the forums.
A server with internal disks is a good design. I would probably put everything Veeam-related on that server to avoid any chicken-and-egg issues. I would also go for RAID6 for safety reasons. The question is how many 2 TB disks you have and whether a proper RAID controller (with battery-backed cache) is in place. I assume that you have ReFS in place.
"Veeam runs on a Windows VM on one of them" -- so this is the only proxy machine you have? If yes, how many tasks does that machine have configured? Is the physical server registered as a proxy?
What does the bottleneck analysis say? 99% source for every job?
Do the 3 ESXi hosts use shared storage, or are they standalone? I'm asking because I'm interested in the backup mode (NBD or Hot-Add) and the ESXi version (older versions have NBD speed limits).
Best regards,
Hannes
- Novice
- Posts: 4
- Liked: never
- Joined: Sep 16, 2021 5:04 pm
Re: Backup Repository Transfer Optimization
Thanks! Long time lurker and all that...
The situation has changed a bit. After reading your response, I made a couple of changes, including setting up RAID10 and ReFS. Between that and setting up the managed server again, everything seemed to be running smoothly -- except with a 95-99% network bottleneck, because Veeam would only use one NIC/vNIC.
But since then, we've found that the old server routinely rejects one of the new hard drives at random, approximately every five days. With no documentation to explain it, we've moved to a different physical server. The setup is identical, except we now have only two 2 TB drives in RAID1.
Now the bottleneck breakdown is: Source 52% > Proxy 53% > Network 68% > Target 96%. Admittedly, the backup job throughput is 140-160 MB/s thanks to compression.
Both the VM and the physical server are proxies. The VM is set to 4 concurrent tasks on 3 cores; the physical device is set to 6 tasks on 6 cores.
The backup mode is hot-add in all cases except the backup for the Veeam VM itself.
Thanks for your suggestions so far as they've greatly improved our throughput. But is there a way to get past this single-NIC utilization issue?
Thanks!
- Product Manager
- Posts: 14836
- Liked: 3082 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Backup Repository Transfer Optimization
Hello,
good to hear.
Network load balancing usually depends on which load-balancing algorithm is used. With multiple virtual proxies, there should be load balancing (assuming LACP with MAC/IP-based load balancing).
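To illustrate why a single stream never exceeds one link: the switch hashes each flow's addresses to pick a member link, so one proxy-to-repository connection always lands on the same 1 Gbps port. A toy sketch of the idea (real hash algorithms are vendor-specific, and the IPs and the example Veeam data port 2500 are made up for illustration):

```python
import hashlib

# Toy LACP-style flow hashing: every packet of a flow maps to the same
# member link, so one TCP stream can never span both 1 Gbps links.
LINKS = 2

def pick_link(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % LINKS

# One proxy talking to the repository -> always the same link:
print(pick_link("10.0.0.11", "10.0.0.50", 49152, 2500))
# A second proxy (different source IP) may hash onto the other link,
# which is why multiple virtual proxies give you the balancing:
print(pick_link("10.0.0.12", "10.0.0.50", 49153, 2500))
```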
Best regards,
Hannes
- Novice
- Posts: 4
- Liked: never
- Joined: Sep 16, 2021 5:04 pm
Re: Backup Repository Transfer Optimization
Great! Is that as simple as setting up another proxy on the same server? I had been following this post from 2017. However, from the VM console I tried adding a second managed-server entry (to install the proxy role) for the physical server, and Veeam refused. It warned (correctly?) that the server I was supplying under a different name was already registered.
Additionally, the Windows VM requires the NIC teaming to be "Switch independent" and "Address Hash."
Do you have any documentation to link, or some advice as to why Veeam stopped me from doing what the post above suggests? This seems like a task many would have tackled before.
- Product Manager
- Posts: 14836
- Liked: 3082 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Backup Repository Transfer Optimization
My suggestion was to create a new Windows / Linux VM and add that as a proxy.
It makes no sense to add the proxy role again to an existing proxy; that's why the wizard prevents it. The post from 2017 also uses multiple proxies, as I suggested.
I have never tried NIC teaming on VMs. It sounds too complicated to me -- no recommendation from my side on how to do that.
- Novice
- Posts: 4
- Liked: never
- Joined: Sep 16, 2021 5:04 pm
Re: Backup Repository Transfer Optimization
That makes perfect sense then. I missed the keyword "virtual" in your reply. I look forward to applying this.
Thanks