Hello,
I'm in the process of provisioning a Windows Server 2016 file server to migrate data that currently lives in an 8TB LUN. It is being migrated from another domain, but we are under pressure to keep the file structure as is. On the old server the VM has all of its VMDKs joined into a dynamic disk within Windows, and I recall reading articles advising against dynamic disks, so I'd rather not repeat that. But if I were instead to create a single large 8TB VMDK, my understanding is that Veeam would take an incredibly long time to back up that VMDK, as it would be limited by the 1-core-and-2/4GB-RAM-per-VMDK limitation.
It was suggested that we create multiple VMDK files and mount them as folders (mount points) on one volume, so the visible structure stays the same but the underlying folders reside on different disk volumes (VMDKs). It is a bit confusing, but it could be a potential solution.
Would you advise against this and why?
Also, would this complicate Veeam ONE reporting of available disk space?
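To make the suggestion a bit more concrete (the folder and drive names below are just placeholders, not our real layout), this is roughly how I'd check which top-level folders actually sit on their own volume and how much space each underlying volume reports, since I suspect that per-volume view is what any reporting tool would end up showing:

```python
# Rough sketch, not production code: paths are placeholders.
# Lists the top-level folders under the share root, flags which ones are
# volume mount points (i.e. live on their own volume/VMDK), and prints
# the free space of the underlying volume for each.
import os
import shutil

SHARE_ROOT = r"D:\Shares"  # placeholder for the share root on the new server

for name in sorted(os.listdir(SHARE_ROOT)):
    path = os.path.join(SHARE_ROOT, name)
    if not os.path.isdir(path):
        continue
    # On Windows, os.path.ismount() is True for NTFS volume mount points,
    # so it tells us whether this folder is backed by a separate volume.
    is_mount = os.path.ismount(path)
    total, used, free = shutil.disk_usage(path)
    print(f"{path}: mount point={is_mount}, "
          f"free={free / 2**40:.2f} TiB of {total / 2**40:.2f} TiB")
```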
- Enthusiast
- Posts: 82
- Liked: 1 time
- Joined: Apr 28, 2015 7:52 am
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
Re: Solution to improve backup speed for large VMs
"I recall reading articles advising against dynamic disks etc."
From my memory as well, the use of dynamic disks spanning multiple VMDKs was for a long time explicitly not a supported VMware configuration. Not sure if that has changed.
"This means that if I were to create a single large VMDK file of 8TB Veeam would take an incredibly long time to backup that VMDK as it's limited by the 1-core-and-2/4GB-RAM-per-VMDK limitation."
Phrasing the limitation like this sounds a bit misleading. It is true that in our environment we get better overall performance when there are multiple proxies performing multiple tasks, such as backups of multiple VMs with multiple VMDK disks. However, a single proxy performing a single task (such as backing up a single VMDK) is, in our environment, still very, very fast, because our back-end storage is fast, our target storage is fast, and the network is fast.
So saying there is automatically a hard per-VMDK limitation is not correct. The ratio you mention is a best-practice guideline for sizing proxies to maximize performance, not an implied ceiling. You should test your scenario.
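To put rough numbers on that (the throughput figures below are assumptions for illustration, not measurements from any particular environment), the time for a full pass over a single 8TB VMDK is simply size divided by the sustained rate your source storage, proxy, network and target can deliver together:

```python
# Back-of-envelope only: full-backup wall time for one 8 TB VMDK at a few
# assumed sustained per-task rates. Measure your own rates and plug them in.
VMDK_TB = 8
for mb_per_s in (150, 300, 600, 1000):            # assumed sustained MB/s
    hours = VMDK_TB * 1_000_000 / mb_per_s / 3600
    print(f"{mb_per_s:>4} MB/s -> ~{hours:.1f} h per full pass")
```

Whether one big VMDK is acceptable depends entirely on which of those components is your bottleneck, which is why testing matters.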
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
Re: Solution to improve backup speed for large VMs
Could you not break up the top-level folders and use DFS to rebuild them into a single share structure again? I've done this multiple times to break up large drives on file servers, with success. DFS also means you can move data around easily, and your next file server upgrade won't require any client reconfiguration.
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
Re: Solution to improve backup speed for large VMs
"DFS also means you can move data around easily, and your next file server upgrade won't require any client reconfiguration."
Seconding this recommendation as general file share practice.
For what it's worth to OP, our organization manages more than 200TB of file share data using DFS to present a unified namespace and it works well. Individual shares stay under 16TB (for now), but users see one big unified tree.
Splitting it early, while the server is still relatively small, is easier than doing it later once it has grown.
- Veeam Vanguard
- Posts: 395
- Liked: 169 times
- Joined: Nov 17, 2010 11:42 am
- Full Name: Eric Machabert
- Location: France
Re: Solution to improve backup speed for large VMs
+1
You should use multiple disks.
4x2TB disks will back up much faster than a single 8TB disk thanks to parallel processing (if your proxy has more than one core).
Having smaller parts is a key factor for operational success.
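As a rough illustration (the per-task rate and task count below are assumptions, adjust them to your own proxy sizing), splitting the data means the proxy can work on several disks at once instead of streaming one huge disk end to end:

```python
# Toy estimate: wall-clock time to back up a set of VMDKs when a proxy can run
# several tasks in parallel, assuming one disk per task and a fixed per-task rate.
def backup_hours(disk_sizes_tb, per_task_mb_s, parallel_tasks):
    lanes = [0.0] * parallel_tasks       # seconds of work assigned to each task slot
    for size_tb in sorted(disk_sizes_tb, reverse=True):
        lanes.sort()                     # give the next disk to the least-busy slot
        lanes[0] += size_tb * 1_000_000 / per_task_mb_s
    return max(lanes) / 3600

# Assumed: 300 MB/s sustained per task, proxy configured for 4 concurrent tasks.
print(f"1 x 8 TB : ~{backup_hours([8], 300, 4):.1f} h")
print(f"4 x 2 TB : ~{backup_hours([2, 2, 2, 2], 300, 4):.1f} h")
```

Same total data, but once all four tasks run concurrently the wall-clock time drops to roughly a quarter.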
Veeamizing your IT since 2009/ Veeam Vanguard 2015 - 2023