-
- Influencer
- Posts: 17
- Liked: 7 times
- Joined: Apr 18, 2012 6:55 pm
- Full Name: Jari Haikonen
- Contact:
Dell EMC DD, best practices, do's and don'ts - ATM
Hi
So after this refreshing ReFS adventure, we are planning to move to DD, to Virtual Edition to be precise (since the licensing & hardware give better options for juggling the costs, as we run this as a 'service' to our internal 'customers').
Now I'd like to ask those of you who have been running this for a longer time: what are the best options you have seen yourselves? This is what I was planning:
- 30d local backup to local DDve onsite (Mtree_30d)
- Backup copy job with GFS options (4 weeks, 12 months, 10 years) to local DDve on site (Mtree_LTS)
- Replication from local DD _LTS to remote DD _LTS
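To put rough numbers on the plan above (my own back-of-the-envelope; the 5 TB full size is just my largest job, and DD dedupe savings are ignored):

```python
# Back-of-the-envelope for the retention plan above (my own numbers,
# not a Veeam calculation; dedupe/compression savings are ignored).
daily_points = 30                  # Mtree_30d: 30 days of restore points
gfs_points = 4 + 12 + 10           # Mtree_LTS: weekly + monthly + yearly fulls
print(daily_points, gfs_points)    # 30 26

# Assuming a 5 TB full (the largest job mentioned), the logical footprint
# of the GFS chain before DD dedupe would be:
full_size_tb = 5                   # assumption, not a measured value
logical_lts_tb = gfs_points * full_size_tb
print(logical_lts_tb)              # 130 (TB logical, pre-dedupe)
```

So the LTS mtree has to hold 26 full restore points logically; how much that costs physically depends entirely on the dedupe ratio DD achieves.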
As for settings, primary backup job:
- Incremental, create synthetic full periodically
- No active full backups (should I?)
- Not sure yet about Health check & Defragment - do I need both of these plus active fulls, or just one of them, to be on the safe side?
- No inline deduplication on the job - I think I'll let DD dedupe handle this (although the horror stories about restore times for LARGE SQL backups sound... bad)
- Compression level: None
- Storage optimization: Local target (16 TB+ backup files), although ours range from 500 GB to 5 TB
Backup copy job
- Synthesizing from increments (did NOT tick the "Read the entire restore point..." checkbox)
- No health check
- No Defragment
- No deduplication
- Compression: None
DD Repository settings @ Veeam
- Decompress backup data blocks (I guess I could untick this since I'm not compressing - and tick it if I am?)
- Use per-VM backup files (although 90% of my jobs back up a single VM)
Also, if someone knows how DD replication works, I have a question about that too. Last night I tested replication with a transformed weekly full, and it seemed to take a lot more time than I expected (still not bad: 100 Mbps line, 1 h for 1 TB). I was under the impression that replication only sends the differential blocks to the offsite DD.

The local DD stored only ~6 GB of differential data; pre-comp data showed 2.2 TB (I guess it counted the .temp file that Veeam made and the .vbk that it synthesized), and physical written was somewhere around 7 GB. So how did the replication end up taking 2 hours in total - shouldn't it just send the 7 GB over the line and be... fast? I calculated the speed to be something like 300 MB/s, which over a 100 Mbps line is still fast, so it works for us. Replication is set up as MTree automatic replication from one DD to the other. (1 TB / 1 h = ~300 MB/s; sometimes it went up to 500 MB/s.)
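A quick sanity check of my own numbers (arithmetic only, using the figures above; it suggests the ~300 MB/s is logical pre-comp throughput, not wire speed):

```python
# Back-of-the-envelope check on the replication numbers above (assumption:
# 1 TB = 10**6 MB, and a 100 Mbps line moves at most 12.5 MB/s).
line_mb_s = 100 / 8            # 100 Mbps link ceiling in MB/s
physical_gb = 7                # roughly what the DD physically wrote
logical_tb = 2.2               # pre-comp data reported for the mtree

# Shipping only the ~7 GB physical delta over the wire:
wire_minutes = physical_gb * 1000 / line_mb_s / 60
print(round(wire_minutes, 1))  # 9.3 -> minutes, so the link isn't the bottleneck

# The ~300 MB/s figure matches logical (pre-comp) throughput over 2 hours:
logical_mb_s = logical_tb * 10**6 / (2 * 3600)
print(round(logical_mb_s))     # 306 -> MB/s of logical data
```

In other words, the wire transfer of the deduped delta alone would take under 10 minutes; the 2-hour total and the ~300 MB/s reading only make sense if DD is reporting logical (pre-comp) data processed, with the rest of the time spent on something other than raw transfer.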
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: Dell EMC DD, best practices, do's and don'ts - ATM
Hi,
Have you performance tested every scenario you expect to encounter during regular operations, using your specific virtual machine configurations? I unfortunately failed to do so when selecting a dedupe vendor (HPE StoreOnce) and ran into issues which support told me will not be fixed any time soon.
I would encourage you to read in full this KB article: https://www.veeam.com/kb1956
In particular this section:
"For quick recovery you may consider using fast primary storage and keeping several restore points (3-7) for quick restore operations such as Instant Recovery, SureBackup, and Windows or other-OS file restores, since they generate the highest amount of random reads. Then use the Data Domain as secondary storage for long-term retention."

In my environment, using HPE StoreOnce, we ran into a performance problem: for any VM with more than one VMDK file, no matter which restore method we used, the restore ran at 1/20th the speed we experienced with a single VMDK file. Because of the way Veeam backup files are formatted, and how deduplication appliances perform under high amounts of random reads, the two problems compound to 1/10 - 1/20 the speed of restoring a single-VMDK virtual machine. Business-wise we are still dealing with the fallout of this limitation and the fact that it will not be fixed any time soon. Of course, your mileage may vary.
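To illustrate the scale of that penalty with made-up but plausible numbers (only the 1/20 ratio comes from my experience; the 200 MB/s baseline is an assumption for the sake of the example):

```python
# Hypothetical illustration of the multi-VMDK restore penalty described
# above. Only the 1/20 ratio is from the post; the baseline restore rate
# and VM size are assumptions.
baseline_mb_s = 200                    # assumed single-VMDK restore rate
penalized_mb_s = baseline_mb_s / 20    # multi-VMDK restore at 1/20th speed
vm_size_tb = 1                         # assumed VM size (1 TB = 10**6 MB)

hours_single = vm_size_tb * 10**6 / baseline_mb_s / 3600
hours_multi = vm_size_tb * 10**6 / penalized_mb_s / 3600
print(round(hours_single, 1), round(hours_multi, 1))  # 1.4 vs 27.8 hours
```

That is the difference between restoring within a maintenance window and blowing through an entire business day, which is why testing restores with your actual VM layouts matters.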
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Dell EMC DD, best practices, do's and don'ts - ATM
evilaedmin wrote: "Because of the way Veeam backup files are formatted, and how deduplication appliances perform under high amounts of random reads, the two problems compound to 1/10 - 1/20 the speed of restoring a single-VMDK virtual machine. Business-wise we are still dealing with the fallout of this limitation and the fact that it will not be fixed any time soon."

You should try Veeam B&R 9.5 U4, since it introduces particular StoreOnce-related improvements in this regard.
-
- Expert
- Posts: 176
- Liked: 30 times
- Joined: Jul 26, 2018 8:04 pm
- Full Name: Eugene V
- Contact:
Re: Dell EMC DD, best practices, do's and don'ts - ATM
"You should try Veeam B&R 9.5 U4, since it introduces particular StoreOnce-related improvements in this regard."

Our SE had mentioned that this improvement was not guaranteed for 9.5 U4 GA - has this changed? Looking forward to it.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Dell EMC DD, best practices, do's and don'ts - ATM
I'm not sure which improvement you discussed with our SE, but U4 does address some performance issues; please wait for the official announcement for details.
-
- Enthusiast
- Posts: 70
- Liked: 8 times
- Joined: May 09, 2012 12:52 pm
- Full Name: Stefan Holzwarth
- Contact:
Re: Dell EMC DD, best practices, do's and don'ts - ATM
I recommend a different setup:
- backup to local disk, 7 days (fast restore speed for your servers)
- copy job to local DDve with 30d, 4w, 12m, 10y
Reads for the daily job then go to local disk instead of the DDve, so faster processing.
I would also use "Read the entire restore point from source....", since your local DDve is not good at handling lots of reads.
- instead of replication, use Boost to copy to the offsite DD (we have had good results using Boost) - that way Veeam knows about the copies
- as an option, you can use snapshots (max age 3 d) on the offsite DDve to be safe against Veeam errors in handling backups, ...
What's your protocol for accessing the onsite DDve - CIFS, NFS?
We always use Boost, even to a local DD - that way it's much harder for a virus to damage the backups, and Windows admins have no access.
As for your question about replication speed - did you measure the amount of data that actually went across the line?