- Service Provider
- Posts: 15
- Liked: never
- Joined: Apr 13, 2017 10:28 am
- Full Name: Owen Wright
I'm looking for a bit of advice on a ground-up build for VMware / Veeam / SAN backup infrastructure.
As a bit of background, we currently run VMware clusters with Veeam 9.5 U3 backups (reverse incrementals), using a Veeam B&R server VM and a physical Veeam proxy server that is directly connected to the VM cluster storage via FC/iSCSI for reading data. The Veeam backup proxy server also has a separate SAN used exclusively for Veeam backups and nothing else. These typically have 4-8 spindles (7200 RPM nearline only), ranging from 4-20 TB in capacity depending on cluster/site. Backup jobs are configured as application-aware reverse incrementals, with most other defaults left untouched.
I have the option to rebuild the backup infrastructure from the ground up and, with it, establish a best practice for our deployments of Veeam and related backup storage, so I would like to gather informed opinions on how to build the best solution available. My goal is to tune the SAN disk groups / stripe size to best match Veeam's I/O profile, tune Veeam to make the best use of the storage, and select the optimal backup approach within Veeam along with any associated job settings.
I plan to retain the VM-based B&R server to manage jobs, as well as the physical proxy server for the heavy lifting.
The connectivity between the Veeam B&R server and the physical Veeam proxy server is 1 Gb Ethernet.
I'm thinking of setting the VM backup job's storage optimisation to best suit the storage segment/stripe sizing.
Assuming the following optimisation settings (ignoring deduplication tuning):

Storage Optimisation | Resulting Block Size
WAN target | 256 KB
LAN target | 512 KB
Local target | 1 MB
Local target (16 TB+ backup files) | 4 MB
Would I be better off configuring jobs with 1 MB (local target) and SAN stripe widths at or above 1 MB?
Any input on this would be appreciated; I'm looking for the optimal settings for throughput.
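To make the stripe-width question concrete, here is a quick sketch of how a 1 MB job block might land on different stripe widths. The ~2x compression ratio is an assumption for illustration, not a measured figure:

```python
import math

# Sketch: how a compressed 1 MB Veeam block maps onto SAN stripe widths.
# COMPRESSION is an assumed ~2x ratio, not a measured figure.
BLOCK_KB = 1024        # "Local target" job block, before compression
COMPRESSION = 2.0

written_kb = BLOCK_KB / COMPRESSION  # ~512 KB actually hits the repository

for stripe_kb in (64, 128, 256, 512, 1024):
    aligned = math.ceil(written_kb / stripe_kb)  # best case: write starts on a stripe boundary
    unaligned = aligned + 1                      # worst case: write crosses one extra boundary
    print(f"stripe {stripe_kb:>4} KB: {aligned} stripe(s) aligned, up to {unaligned} unaligned")
```

With a 1 MB stripe, even an unaligned ~512 KB write touches at most two stripes, which is the kind of 2:1 stripe-to-workload relationship storage vendors often suggest.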
On a separate point, regarding reducing the time taken to capture the backup data from the servers, would a forward incremental be best placed?
I have always preferred reverse incrementals historically. However, after various restores of VMs from the latest reverse point and from older points, I see that older points take a little longer to restore, though never to the point where it becomes a problem. So I am genuinely considering forwards, moving forward - no pun intended.
I don't have any issue with Veeam performing maintenance after the backup data has been captured, as this leaves the VMs and their storage untouched, and thus unimpacted, from that point.
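One way to picture the restore-path difference I'm describing is to count the files a restore has to touch. This is a simplified sketch (real chains also include periodic synthetic or active fulls):

```python
def files_to_read(fmt: str, total_points: int, points_back: int) -> int:
    """Files touched to restore the Nth-newest point (0 = latest).

    Reverse incremental: the latest point IS the full (.vbk); each
    step back applies one rollback file (.vrb).
    Forward incremental: the oldest point is the full; a restore reads
    the full plus every increment (.vib) up to the chosen point.
    """
    if fmt == "reverse":
        return 1 + points_back                       # full + rollbacks
    if fmt == "forward":
        return 1 + (total_points - 1 - points_back)  # full + increments
    raise ValueError(fmt)

# Restoring from a 30-point chain:
print(files_to_read("reverse", 30, 0))   # latest reverse point: just the full
print(files_to_read("reverse", 30, 29))  # oldest reverse point: full + 29 rollbacks
```

This matches what I see in practice: reverse is fastest for the latest point and slows down the further back you go, while forward behaves the opposite way.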
I'm also considering GFS, assuming I can keep all of this on disk, with 7 daily, 4 weekly, and 12 monthly points.
Again, this would depend on all of the above hardware, with no tape etc.
I'll leave out any reference to the number of restore points and secondary backups to DR/tape etc. Suffice to say, I anticipate 20-30 restore points (disk) depending on site.
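For sizing that GFS scheme on disk, a rough footprint estimate looks like this. Every number here is a placeholder assumption (4 TB compressed full, 5% daily change, weekly/monthly points kept as independent fulls, no repository-side dedup):

```python
# Ballpark on-disk footprint for a 7/4/12 GFS scheme -- all inputs assumed.
FULL_TB = 4.0          # assumed compressed full backup size
DAILY_CHANGE = 0.05    # assumed daily change rate
DAYS, WEEKS, MONTHS = 7, 4, 12

incrementals_tb = DAYS * FULL_TB * DAILY_CHANGE   # daily increments
fulls_tb = (1 + WEEKS + MONTHS) * FULL_TB          # active-chain full + GFS fulls
total_tb = incrementals_tb + fulls_tb
print(f"~{total_tb:.1f} TB on disk")
```

The point of the sketch is that the GFS fulls dominate: with these assumptions the 16 retained fulls dwarf the daily increments, so the 4-20 TB backup SANs would need to grow (or the fulls would need dedup/synthetic handling) to hold 12 monthlies.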
I want to get the best possible tuning and performance here. And yes, I know more spindles would be better; where possible that will happen, but I'm not lucky enough to have an unlimited budget.
If you spot any obvious errors here please let me know.
- VP, Product Management
- Posts: 4801
- Liked: 944 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
Thanks for the questions. I hope I can help with some of them.
Block size has many positive and negative effects.
In general, the bigger the block, the less metadata Veeam needs to handle, the better the backup performance, and the lower the RAM usage on the proxy. But Veeam deduplication is reduced, and restore speed can be affected badly, specifically for FLR/Instant VM Recovery/Explorers.
Small block sizes lead to the opposite.
WAN target was originally introduced to optimise WAN speed a bit, but it was useful only in corner cases.
I would always go with 1 MB as the default. The 16TB+ option has been 4 MB for some time now; use it only if you have Veeam full backup files bigger than 8 TB (16 TB of source VM data), or if you work with dedup appliances and their best practice guides recommend this setting.
Overall, if you back up a 1 MB data block, it will end up as a smaller block on the storage after compression/deduplication, landing as a 64 KB-512 KB block on the storage side. Per common storage best practices, the storage block size is usually 2x the workload size, so that unaligned data puts only every second I/O across a storage block boundary. But as today's storage systems are virtualised internally and apply a lot of other optimisations, it is hard to tell which block size would work best; this differs per vendor. In general, though, a bigger block size is used. The vendor's best practices for Exchange 2016 block sizes may be a good starting point.
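The metadata effect mentioned above is easy to quantify: the number of blocks Veeam has to track scales inversely with block size. A simplified sketch (the 10 TB source size is an assumption; this ignores CBT and sparse regions):

```python
# Blocks tracked per job at each storage-optimisation setting.
SOURCE_TB = 10                 # assumed amount of VM data in the job
source_kb = SOURCE_TB * 1024**3  # 1 TB = 1024**3 KB

for name, block_kb in [("WAN target", 256), ("LAN target", 512),
                       ("Local target", 1024), ("Local target 16TB+", 4096)]:
    blocks = source_kb // block_kb
    print(f"{name:>20}: {blocks:,} blocks tracked")
```

Going from 256 KB to 1 MB cuts the block count (and thus metadata and proxy RAM pressure) by 4x; 4 MB cuts it by 16x, at the cost of coarser dedup and restore granularity.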
Reverse incremental is our oldest and most mature backup format, so it is a safe bet.
However, in recent years customers have been moving away from it to forever forward incremental, as the VM snapshot lifetime is shorter (less I/O to commit the VM snapshots) and the proxy task slots are freed up earlier. Overall, this allows most customers to achieve a shorter backup window.
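The repository-side I/O difference between the two formats per changed block can be sketched as follows (the classic 3x-vs-1x rule of thumb; the 200 GB changed-data figure is an assumption for illustration):

```python
def repo_ios_per_changed_block(fmt: str) -> int:
    """Repository I/Os to commit one changed block during a backup run.

    Reverse incremental rewrites the full in place: read the old block
    from the .vbk, write it to the rollback file, write the new block
    into the .vbk -- 3 I/Os. Forward incremental simply appends the new
    block to the day's .vib -- 1 I/O.
    """
    return 3 if fmt == "reverse" else 1

changed_gb = 200  # assumed changed data per run
print(f"reverse: ~{changed_gb * repo_ios_per_changed_block('reverse')} GB of repository I/O")
print(f"forward: ~{changed_gb * repo_ios_per_changed_block('forward')} GB of repository I/O")
```

On the slow nearline spindles described in the original post, that 3x repository I/O is usually what makes reverse incremental runs (and hence VM snapshot lifetimes) longer.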