-
- Novice
- Posts: 9
- Liked: 7 times
- Joined: Aug 18, 2016 6:16 pm
- Full Name: Bert
- Contact:
Thanks for the Veeam and ReFS Magic
Hello all,
Not to distract from folks' important technical questions around here, but I just wanted to stop in and say thank you for all the work Veeam put into ReFS integration. We already had one of our sites running ReFS with good results (about 40% savings, I believe), but our other site had been having issues with storage space. We had two 50TB NTFS volumes, one of which held the majority of our Veeam backups for the site. I had to manually delete backups every week before the retention schedule was up. We didn't have the budget for more storage, so we took the time to move the backups to old storage, reinstall the server with Windows Server 2016, format ReFS, and move the files back. We are now saving 45.7% space (64.7TB stored in 35.1TB of actual space) without using any Windows Dedup. Plus, I don't have to delete anything before the retention schedule removes it automatically. While the upgrade and moving was a pain, it was 100% worth it. We'll have enough storage until next year's budget allows for purchasing more, and I'm not even sure we'll need it then.
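The savings figure quoted above can be reproduced with a couple of lines of arithmetic (using the post's own numbers; the variable names are just for illustration):

```python
# Reproduce the space-savings math from the post: 64.7 TB of backup data
# stored in 35.1 TB of actual disk space thanks to ReFS block cloning.
logical_tb = 64.7   # what the backup files add up to
physical_tb = 35.1  # what ReFS actually consumes on disk

savings = 1 - physical_tb / logical_tb
print(f"Space savings: {savings:.1%}")  # ~45.7%
```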
To my vendors selling backup dedup appliances: Don't need them; Don't want them; Stop asking.
Thanks again,
BertM
-
- Enthusiast
- Posts: 92
- Liked: 14 times
- Joined: Jan 28, 2011 4:40 pm
- Full Name: Taylor B.
- Contact:
Re: Thanks for the Veeam and ReFS Magic
I just learned about ReFS integration at VeeamOn last month and have already implemented it. I doubled the amount of restore points I am keeping and synthetic fulls went from 12-20 hours to less than 1 hour.
So I will join you in thanking Veeam for this amazing feature!
-
- Expert
- Posts: 186
- Liked: 22 times
- Joined: Mar 13, 2019 2:30 pm
- Full Name: Alabaster McJenkins
- Contact:
Re: Thanks for the Veeam and ReFS Magic
It is very awesome, but one of the big "gotchas" that I see is that once you have long-term retention of months or years set up on ReFS, and then say you want to migrate all of that to a new storage device, you LOSE all the block clone savings. You would need mountains of space on your new system to move it all, or you would have to just start a new chain.
But imagine starting a new chain when you have years of retention on the old storage. You would have to hope that the old storage can stick around and keep working for years until retention passes and you no longer need it.
Now, having said all of this, it is a ReFS/Microsoft limitation and not Veeam's, but I wish Microsoft could figure this out.
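To put rough numbers on the gotcha described above (the figures here are illustrative, not from any specific environment): a plain file copy rehydrates block-cloned backups, so the target must hold the full logical size, not the physical footprint:

```python
# Illustrative sketch: what a plain copy of a block-cloned chain costs.
# A copy writes every backup file at full size, so the target needs the
# logical total, not the (much smaller) physical total on the source.
physical_tb = 35.0   # space the chain occupies on the ReFS source
logical_tb = 65.0    # sum of all backup file sizes (what a copy writes)

print(f"Space needed on target after a plain copy: {logical_tb} TB")
print(f"Extra space lost to rehydration: {logical_tb - physical_tb} TB")
```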
-
- Veeam Legend
- Posts: 1203
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Is it? Veeam should be able to transfer the old backups and then create new references, should it not?
It might not be fast, but it should be possible...
-
- Chief Product Officer
- Posts: 31803
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Yes, it should be possible on our end, and we do have this feature tracked under the "internal data mover" name... hopefully, one day I will have the dev resources to implement it. It's always hard to find resources for these sorts of "nice to have" features, but I try to squeeze some of them in every release. Devs generally don't like them because they don't feel these make a big difference; everyone wants to be building the Next Big Thing!
-
- Service Provider
- Posts: 70
- Liked: 10 times
- Joined: Jul 27, 2016 1:39 am
- Full Name: Steve Hughes
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Gostev,
Customer feedback... I for one would love the ability to move backups to new storage without losing the space savings. Assuming that it's doable, I think it should be right up there as a 'next big thing'.
Steve.
-
- Service Provider
- Posts: 275
- Liked: 61 times
- Joined: Nov 17, 2014 1:48 pm
- Full Name: Florin
- Location: Switzerland
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Another Customer feedback here:
As a service provider, it's quite a big limitation for us that we aren't able to move backups while keeping the space savings. For example: we have a few customers asking for long-term retention (10 years) of their Cloud Connect data, but we couldn't offer it, because we simply can't promise that we can keep their data on our ReFS storage for 10 years. If we ever have to move their backups to a new repository (and I'm pretty sure that will have to happen 1-3 times during those 10 years), we would have to buy a damn skyscraper full of storage. So, such a move tool would be a fantastic thing!
-
- Enthusiast
- Posts: 82
- Liked: 11 times
- Joined: Nov 11, 2016 8:56 am
- Full Name: Oliver
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Gonna hijack this also for the backup-data moving feature (sorry!). It would be great to have.
And thanks, OP, for this information! I didn't know that ReFS could save space without deduplication enabled (although I'm quite happy with Windows Dedup on the NTFS and ReFS volumes in my labs, which are far from multiple TB).
-
- Service Provider
- Posts: 129
- Liked: 27 times
- Joined: Apr 01, 2016 5:36 pm
- Full Name: Olivier
- Contact:
Re: Thanks for the Veeam and ReFS Magic
As a side note,
ReFS creates "fragmentation" over time, which increases random access and slows down restores if the spindle count doesn't keep up. There is no easy way to quantify this, so I would recommend running an active full every few months.
Feedback from experience, or some general guidance, would be welcomed by many of us.
Oli
-
- Chief Product Officer
- Posts: 31803
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Thanks for the Veeam and ReFS Magic
We don't really get many complaints regarding ReFS fragmentation.
I suspect that the reason is that Veeam uses fairly large block sizes (512KB on average with default settings), so fragmentation does not impact performance as much unless, as you correctly mentioned, spindle count is very low - and thus backup storage lacks IOPS in general (which is the real issue).
Luckily, most customers out there can still quadruple their backup storage IOPS capacity by using the Veeam-tuned RAID stripe size (we recommend 256KB) as opposed to the several-times-smaller default (usually 64KB).
-
- Veteran
- Posts: 528
- Liked: 144 times
- Joined: Aug 20, 2015 9:30 pm
- Contact:
Re: Thanks for the Veeam and ReFS Magic
backupquestions wrote: ↑Jun 10, 2019 7:42 pm It is very awesome, but one of the big "gotchas" that I see is that once you have long-term retention of months or years set up on ReFS, and then say you want to migrate all of that to a new storage device, you LOSE all the block clone savings. You would need mountains of space on your new system to move it all, or you would have to just start a new chain.
But imagine starting a new chain when you have years of retention on the old storage. You would have to hope that the old storage can stick around and keep working for years until retention passes and you no longer need it.
Now, having said all of this, it is a ReFS/Microsoft limitation and not Veeam's, but I wish Microsoft could figure this out.
I run my repo inside a VM, so to move it to new hardware, I just have to move the vhdx file. Of course, with a repo of 100TB, that's easier said than done!
-
- Service Provider
- Posts: 129
- Liked: 27 times
- Joined: Apr 01, 2016 5:36 pm
- Full Name: Olivier
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Gostev wrote: ↑Jun 17, 2019 8:16 pm We don't really get many complaints regarding ReFS fragmentation.
I suspect that the reason is that Veeam uses fairly large block sizes (512KB on average with default settings), so fragmentation does not impact performance as much unless, as you correctly mentioned, spindle count is very low - and thus backup storage lacks IOPS in general (which is the real issue).
Most people don't need to be convinced about the storage space gains and backup performance of ReFS. I had to migrate a fairly large repository, about 3 TB physical in size (much more on the logical backup side), stored on RAID5 8+1 (600 GB 10K SAS spindles over FC), because the array was slowly dying. As soon as I launched the copy, nothing happened transfer-wise for a few minutes, although disk I/O was occurring; then it suddenly started to transfer. Incrementals transferred at about 120 Mb/s, but when it hit a VBK it slowed down to 40-60 Mb/s. Since then, I include an active full from time to time, hoping to mitigate this slowdown a little.
While we are talking about ReFS, I would like to digress a little onto the deduplication subject. Microsoft deduplication on ReFS (or NTFS) makes its decisions at the file level, and it is known that the bigger the file, the less support from Microsoft, etc. I think it would be a neat feature, when dedup is enabled, to split the VBK into smaller chunks (e.g. 1TB).
Oli
-
- Service Provider
- Posts: 129
- Liked: 27 times
- Joined: Apr 01, 2016 5:36 pm
- Full Name: Olivier
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Actually, that's a very clever solution! Let's hope Gostev's devs don't see that one, or the internal data mover won't be happening anytime soon.
Oli
-
- Service Provider
- Posts: 153
- Liked: 34 times
- Joined: Dec 18, 2017 8:58 am
- Full Name: Bill Couper
- Contact:
Re: Thanks for the Veeam and ReFS Magic
TL;DR: +1 for moving backup files without losing ReFS space savings - please implement this! If Veeam can implement this inside the backup software, I think it would be a 'killer' and 'must have' feature. It is hard to recommend ReFS without it.
I work at a small service provider. We have on-premises managed customers with their own servers, and we also host some VMs in our small datacenter. In our datacenter, we have been using VM repo servers with ReFS volumes for over 18 months now. We have ~1PB of backup files on ReFS repositories.
Balancing extents is an ongoing, almost daily issue. We can't predict customer growth with certainty, and we don't want any of the per-VM backup chains to 'spill' across to a new extent. Since we are using VM repo servers, this caused some issues for some of the earlier SOBRs, which had a small number of large extents. Since then I have found it better to build SOBRs with lots of smaller extents, spreading the VMs out as much as possible; we then monitor extent utilization daily and extend them as needed. This has worked fairly well, but it's not ideal. It's particularly bad around Veeam patch time, when things tend to break pretty badly.
Let's not even talk about the time I had to rehydrate 350TB of VBKs onto tapes, simply because Microsoft gave us no way to transfer ReFS data intact (like they have done with dedup). I was cursing a LOT that day.
Edit: yes, I know tapes are not ReFS... what I mean is that I needed to make a second copy of a bunch of backup files. I wanted to copy to disk but ended up having to dump it to tape.
-
- Service Provider
- Posts: 70
- Liked: 10 times
- Joined: Jul 27, 2016 1:39 am
- Full Name: Steve Hughes
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Say what? Can you elaborate? A quadrupling of IOPS capacity is not easy to pass up. I do usually configure my RAIDs with large stripes, but is there an optimum size for a Veeam repo, and what's the reasoning behind that?
-
- Chief Product Officer
- Posts: 31803
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Thanks for the Veeam and ReFS Magic
With the default settings, average Veeam backup file block size is around 512KB. Reading this block from RAID with 64KB stripe size will touch 8 drives requiring 8 IOPS, while reading the same block from RAID with the recommended 256KB stripe size will touch only 2 drives requiring 2 IOPS - which is 4 times less. This is oversimplified of course, but you get an idea.
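The drive-count arithmetic in the post above can be sketched in a few lines (a simplification, as the post itself notes, ignoring parity drives and controller caching):

```python
# How many drives (and thus back-end IOPS) a single read of an average
# 512KB Veeam block touches, at each RAID stripe size.
import math

block_kb = 512  # average Veeam backup block with default settings

for stripe_kb in (64, 256):
    drives_touched = math.ceil(block_kb / stripe_kb)
    print(f"{stripe_kb}KB stripe: {drives_touched} drives ({drives_touched} IOPS) per read")
# 64KB stripe -> 8 drives; 256KB stripe -> 2 drives: the 4x difference
```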
-
- Service Provider
- Posts: 70
- Liked: 10 times
- Joined: Jul 27, 2016 1:39 am
- Full Name: Steve Hughes
- Contact:
Re: Thanks for the Veeam and ReFS Magic
Yep indeed. Thanks.