-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Question re ReFS 3 speed benefits and RAID6
This is probably not something anyone can answer for sure at this point, but I'm wondering if some of the Veeam guys, or guys with preview access to 9.5, might have a stab at it.
Say we have a repository with 20 x 4TB disks currently in RAID10, giving us a 40TB volume, and we use reverse incremental. Would the block clone improvements in ReFS 3 / Server 2016 make it viable to switch to RAID6 without sacrificing too much overall speed?
Obviously it's a fairly difficult thing to quantify, as everyone's specific use case is different, but it seems like the sort of thing a lot of people might be very interested in, and I'm wondering if Veeam has done any internal testing that might be available.
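For reference, the capacity side of the trade-off is simple arithmetic; a quick sketch (in C, just for illustration, assuming all 20 disks are reused for the RAID6 set):

```c
#include <stdio.h>

/* Usable capacity: RAID10 gives up half the disks to mirroring,
   RAID6 gives up two disks' worth to parity. */
int main(void) {
    const int disks = 20;       /* from the question above */
    const double size_tb = 4.0; /* 4TB drives */

    double raid10 = disks / 2 * size_tb;   /* 10 * 4 = 40TB */
    double raid6  = (disks - 2) * size_tb; /* 18 * 4 = 72TB */

    printf("RAID10 usable: %.0fTB\n", raid10);
    printf("RAID6  usable: %.0fTB\n", raid6);
    return 0;
}
```

So the same 20 spindles would grow the repository from 40TB to 72TB, which is why the question is worth asking.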
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
I don't know if anyone has done any testing with reverse incremental since, as a general rule, the forward incremental modes are recommended in the vast majority of cases. That's certainly been the focus of all of my testing to this point.
Since the primary reason to go with RAID10 over RAID6 was the IOPS required for synthetic operations, I can certainly say that there is almost zero reason to use RAID10 for ReFS with any of the forward incremental options. I suspect it would have a huge impact on reverse incremental performance as well, but I certainly haven't tested it. I'll set up a test in my lab, but it will probably take a day or two to get any reasonable results.
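For anyone wondering what "block clone" actually does under the hood: ReFS exposes it through the documented FSCTL_DUPLICATE_EXTENTS_TO_FILE control code, which remaps existing clusters into the target file as a metadata operation instead of copying data. A minimal sketch of the call follows (illustrative only, not Veeam's actual code; handle opening, alignment handling, and error checking are left out):

```c
#include <windows.h>
#include <winioctl.h>

/* Clone the first `len` bytes of src into dst without copying data.
   On ReFS this just adds references to the existing clusters, which is
   why synthetic fulls and reverse-incremental rollbacks get so cheap. */
static BOOL clone_region(HANDLE src, HANDLE dst, LONGLONG len)
{
    DUPLICATE_EXTENTS_DATA dup = {0};
    DWORD bytes = 0;

    dup.FileHandle = src;              /* file to clone from          */
    dup.SourceFileOffset.QuadPart = 0; /* offsets and byte count must */
    dup.TargetFileOffset.QuadPart = 0; /* be cluster-aligned          */
    dup.ByteCount.QuadPart = len;

    /* The ioctl is issued against the destination file's handle. */
    return DeviceIoControl(dst, FSCTL_DUPLICATE_EXTENTS_TO_FILE,
                           &dup, sizeof(dup), NULL, 0, &bytes, NULL);
}
```

Both files have to live on the same ReFS volume, and the destination must be pre-extended to cover the cloned range; the call then completes as a metadata update regardless of `len`, with no random read/write I/O against the spindles.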
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
Hi Tom
That would be awesome. In our case we wanted to get fulls to tape, and doing virtual fulls had the same effect as the synthetic operations, to the point that the array couldn't keep up with the tape drives, so we switched over to reverse incremental. Since I'm going to have to reformat anyway to get ReFS, it's a perfect time to rethink the RAID level too.
Dave
-
- VP, Product Management
- Posts: 6035
- Liked: 2860 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
So I have done some basic testing, enough to say that reverse incremental with ReFS+block clone is similar in performance to forward incremental, although the exact profile is slightly different. My test is not on a huge dataset, but it is directed at a "worst case" storage system, a RAID5 array with only three 3TB SATA drives, so not much performance there.
The source data size is ~435GB, with about 300GB used, and the full backup is roughly 120GB on disk after compression. These tests were done with per-VM backup files, so keep that in mind. Typical daily change was around 100GB raw, 25-30GB in the VIB/VRB file after compression/dedupe. The exact same VMs were backed up to four different repository types using the same back-end disks:
NTFS -- Forward Incremental
NTFS -- Reverse Incremental
ReFS -- Forward Incremental
ReFS -- Reverse Incremental
Full backup performance was, as expected, very similar in all cases, with ReFS being about 3-5% slower, but I'd consider the difference uninteresting; I'd really need to run the test multiple times to make sure it wasn't just minor testing variation. It was incremental backups that we were interested in. Below I'll share the averages of these jobs over 7 days, as well as the worst-case times, to give some idea of the performance difference. For forward incremental I'll break out the backup time vs the merge time, because I think that's important in separating the snapshot open time for the VMs from the disk processing time.
All Jobs
Average Data Read/day: ~90GB
Average backup size/day: ~30GB
Worst case was a change rate of nearly 135GB
NTFS -- Forward Incremental
Average time to backup all 7 VMs/day: ~11min (worst case was ~20 min)
Average merge time/day: ~24min (worst case was ~44 min)
Average total time/day: ~35min
NTFS -- Reverse Incremental
Average time to backup all 7 VMs/day: ~45min (worst case was ~70 min)
ReFS -- Forward Incremental
Average time to backup all 7 VMs/day: ~11min (worst case was ~20 min)
Average merge time/day: ~2min (worst case was ~3 min)
Average total time/day: ~13min
ReFS -- Reverse Incremental
Average time to backup all 7 VMs/day: ~13min (worst case was ~20 min)
As you can see, in this fairly small-scale testing, reverse incremental was roughly 3.5x faster on ReFS vs NTFS on the same back-end disk storage. That's a pretty huge improvement, and it should be more than enough to make up for the switch from RAID10 to RAID6, especially since sequential write performance should actually increase with RAID6. The move to ReFS + block clone so significantly reduces the random I/O requirement that the major benefit of moving from RAID6 to RAID10, namely increased random I/O capability, doesn't really exist anymore.
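To put rough numbers behind that last point, here is the classic write-penalty arithmetic (a back-of-the-envelope sketch; the 150 IOPS per spindle is just an assumed figure for 7.2k SATA/NL-SAS drives):

```c
#include <stdio.h>

/* Classic small-random-write penalty: RAID10 costs 2 IOs per write,
   RAID6 costs 6 (read data, read P, read Q, write data, write P, write Q). */
int main(void) {
    const int spindles = 20;
    const int iops_per_disk = 150;            /* assumed, 7.2k drives */
    const int raw = spindles * iops_per_disk; /* 3000 raw IOPS        */

    printf("RAID10 random-write IOPS: ~%d\n", raw / 2); /* ~1500 */
    printf("RAID6  random-write IOPS: ~%d\n", raw / 6); /* ~500  */
    /* A 3x gap that hurts NTFS synthetic/reverse I/O badly, but is
       largely irrelevant once block clone removes the random writes. */
    return 0;
}
```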
I'll try to stand up a larger-scale test in the future; this was really more of an informal test since I had a few spare cycles in the lab, but I think it indicates the huge benefits of ReFS + block clone even for reverse incremental.
-
- Veteran
- Posts: 370
- Liked: 97 times
- Joined: Dec 13, 2015 11:33 pm
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
Wow, that's much better than I was expecting, and I had pretty high hopes.
Thanks for this; it's awesome and makes the transition to RAID6 a non-issue.
Dave
-
- Product Manager
- Posts: 14839
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
I also did some tests that might be relevant for really small environments or branch offices. They show that even in small environments the benefit of ReFS is significant.
My setup: (screenshot)
Results for backup: (screenshot)
It also had some positive effects on disk usage: (screenshot)
whereas the file sizes shown in Explorer are different: (screenshot)
Restore speed for a full disk restore is almost equal on both filesystems in my environment, because I only have a 1 Gbit network (roughly 125 MB/s), so the network is the bottleneck rather than the repository...
-
- Veteran
- Posts: 385
- Liked: 39 times
- Joined: Oct 17, 2013 10:02 am
- Full Name: Mark
- Location: UK
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
This is pretty exciting. RAID10 performance using RAID6.
I presume restores will be the same, or does it manage large fragmented files better?
Thanks.
-
- Influencer
- Posts: 17
- Liked: 2 times
- Joined: Oct 23, 2013 6:15 am
- Full Name: Janåke Rönnblom
- Contact:
Re: Question re ReFS 3 speed benefits and RAID6
Have you at Veeam done any more testing?
Or does someone have experience from running it in production?
-J
-
- Lurker
- Posts: 2
- Liked: never
- Joined: Nov 08, 2016 2:20 pm
- Full Name: Ronald Adkins
- Contact:
[MERGED] Reverse incremental backups to ReFS???
On another thread (linked below) I saw that tsightler had done some basic testing where reverse incrementals were about 3.5 times faster when going to ReFS compared to NTFS. I have done similar testing and I only see about a 5% increase in speed. When I do forward incrementals the difference in speed is night and day, but when doing reverse incrementals, not so much. Just wondering what kind of performance others are seeing when doing reverse incrementals to ReFS repositories.
veeam-backup-replication-f2/question-re ... 38331.html