As a result, restoring thin disks involves extra allocation overhead compared to restoring thick disks, which reduces performance.
In our environment with Dell SC (Compellent) 7.2, restoring with the thin disk option is typically 10-20x slower than with EagerZeroedThick. For smaller machines, restoring as EagerZeroedThick is not a big deal because Dell SC does not write the zeroes to the back end and the accelerated creation operation is fast, but for larger machines creating the EagerZeroedThick disk itself takes hours before any data begins restoring -- which makes one wonder if the accelerated creation operation is functioning correctly.
Is everyone experiencing a difference of this magnitude between the two disk formats when doing a full restore? When restoring to thin, the proxy restoring the data via hotAdd experiences virtual disk latencies greater than 200 ms.
Yes, that is possible, but it is not practical in a reasonable amount of time either. When the proxy VM is writing out data in thin format, it experiences extreme latencies in our environment (250 ms per I/O), leading to extremely poor restore throughput. So part of my question is whether this is unique to our configuration or whether others have experienced it as well.
DavoudTeimouri wrote: ↑Oct 26, 2018 7:03 am
What about LazyZeroed?
LazyZeroed behaves the same as thin.
DavoudTeimouri wrote: ↑Oct 26, 2018 7:03 am
Also if you have high latency, you have big problem with storage array and using thin or thick is not different to you.
Latency is 250 ms when using thin or LazyZeroed, 5 ms when using EagerZeroedThick.
I wouldn't say this is unique to your configuration, since initially there was no eager zeroed disk restore mode in Veeam B&R (probably due to the behavior you see with large disks, where disk preallocation takes quite long). Several versions ago we implemented it based on customer requests.
evilaedmin wrote: ↑Sep 27, 2018 9:11 pm for larger machines creating the EagerZeroedThick disk itself takes hours before any data begins restoring -- which makes one wonder if the accelerated creation operation is functioning correctly.
Dell identified a bug, or at least unfortunate behavior, in Dell SC's handling of accelerated EagerZeroedThick disk creation:
Even though the creation of an EagerZeroedThick volume by Veeam should be an operation accelerated by Dell SC:
If there are existing "unowned" pages as part of the destination SC Volume, the array will try to UNMAP the page to free it before performing the accelerated WRITE SAME of zeroes. An unowned page is a page that is in use by the volume but eligible to be reclaimed by UNMAP to be returned to the free page pool via free space reclamation. Unfortunately the process of freeing a page, then performing an accelerated write of zeroes, is much slower than if the host simply did an un-accelerated stream of zeroes in the first place.
The workaround appears to be to perform ESXi free space reclamation on volumes before doing any EagerZeroedThick disk creation.
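As a sketch of that workaround, manual space reclamation can be triggered from an ESXi host shell before the restore using `esxcli storage vmfs unmap` (the datastore label below is a hypothetical placeholder):

```shell
# Run from an ESXi host shell (e.g. over SSH) against the target datastore.
# Frees unowned/dead pages on the backing SC volume so the subsequent
# accelerated WRITE SAME of zeroes does not have to UNMAP each page first.
#   -l  datastore label (hypothetical name below; substitute your own)
#   -n  number of VMFS blocks reclaimed per iteration (optional tuning)
esxcli storage vmfs unmap -l SC-Datastore-01 -n 200
```

On VMFS6, space reclamation can also run automatically in the background, but running a manual pass immediately before creating a large EagerZeroedThick disk helps ensure the pages are already returned to the free pool.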
No information yet. And with the new midrange storage announcement coming soon from Dell, rumored to consolidate all the different product lines, I don't have much 'hope' for the future of SC, speaking purely as a customer.