-
- Veeam Legend
- Posts: 1207
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Reference Architecture
Hello,
I know there was a "Reference Architecture" document about repository setup in the past.
I remember something like "use faster storage for the local repo and keep a few restore points on site, then use slower storage as the copy target and keep more restore points there".
But I can't find it. Since we have to purchase new storage to finally say goodbye to ReFS, I wanted to try to implement such a concept.
Does anyone have a link?
Markus
-
- Chief Product Officer
- Posts: 31905
- Liked: 7402 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Reference Architecture
Honestly, this is basically it as far as the reference architecture goes... there's nothing to add.
I don't remember any documents either, but there was a Veeam blog about this.
-
- VP, Product Management
- Posts: 6035
- Liked: 2863 times
- Joined: Jun 05, 2009 12:57 pm
- Full Name: Tom Sightler
- Contact:
Re: Reference Architecture
I'm disappointed to see you have to move away from ReFS. We have so many customers running on ReFS these days (literally thousands, if not tens of thousands), and I see very few issues compared to 12-18 months ago; really, most customers I've spoken with over the last 6 months have had no issues at all. Admittedly, almost every customer I work with is still using Windows Server 2016, and I know you've been fighting with 2019 for months now, so I understand why you might need to make this decision (I've seen very limited field experience with 2019 to this point).
That being said, there are several things to take into consideration with NTFS. Obviously, there's the potential for significantly more space usage (if you use synthetic full backups or GFS in copy jobs), and the IOPS load for merges is far higher. Both of these issues can be somewhat mitigated with forever forward incremental mode, since you only need to merge incremental data, but then you have to think about defrag operations and health checks. Another issue is maximum volume size: you can format NTFS volumes larger than 64 TB, but there are limitations, so just be aware you might need more, smaller volumes.
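To put rough numbers on that space difference, here's a quick back-of-the-envelope sketch. The 10 TB source size, 5% daily change rate, and 30-day retention are made-up illustration values, not sizing guidance; plug in your own figures.

```python
# Rough repository sizing on NTFS, where every full backup file
# occupies its full size on disk (no ReFS block cloning).
source_tb = 10.0      # size of one full backup (hypothetical)
change_rate = 0.05    # daily change rate (hypothetical)
retention_days = 30   # desired restore points (hypothetical)

incr_tb = source_tb * change_rate  # size of one incremental

# Weekly synthetic fulls: each retained week costs one full-size
# backup file plus six incrementals, and building the newest
# synthetic full temporarily needs another full's worth of space.
weeks = retention_days // 7
synthetic_tb = weeks * (source_tb + 6 * incr_tb) + source_tb

# Forever forward incremental: one full plus a rolling chain of
# incrementals; the oldest increment is merged into the full daily.
forever_fwd_tb = source_tb + retention_days * incr_tb

print(f"one incremental:        {incr_tb:.1f} TB")
print(f"weekly synthetic fulls: {synthetic_tb:.1f} TB")
print(f"forever forward:        {forever_fwd_tb:.1f} TB")
```

The daily merge in the forever forward case is also where the extra IOPS load comes from on NTFS: the oldest incremental's blocks must be physically read and rewritten into the full, rather than block-cloned as on ReFS.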
As far as general reference architecture goes, in the ideal case we would like to see a fast, IOPS-optimized repository that can store 7 days' worth of backups (you can adjust this larger or smaller based on your specific requirements), and then a larger, more capacity-oriented repo that can still provide enough IOPS to service the required merges/synthetics (the latter is especially important if using GFS).
I still know of a lot of clients that build identical repositories for onsite/offsite, keeping, for example, 30 days in both. This is also OK, as long as they are designed with enough IOPS to get the jobs done.
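As a toy illustration of that two-tier split (again, all numbers here are my own made-up assumptions, not from any official sizing guide):

```python
# Toy split of retention between a fast primary repo and a
# capacity-oriented backup copy repo. Illustrative numbers only.
source_tb = 10.0
change_rate = 0.05
incr_tb = source_tb * change_rate

# Tier 1: fast, IOPS-optimized repo holds a short chain.
fast_days = 7
fast_tb = source_tb + fast_days * incr_tb

# Tier 2: capacity repo (backup copy target) holds the long chain
# plus monthly GFS points, each a full-size file on NTFS.
copy_days = 30
gfs_monthlies = 12
capacity_tb = source_tb + copy_days * incr_tb + gfs_monthlies * source_tb

print(f"fast tier ({fast_days} days):        {fast_tb:.1f} TB")
print(f"capacity tier (30d + GFS): {capacity_tb:.1f} TB")
```

Note the capacity tier still needs real IOPS for the GFS synthetics, which is the point above: capacity-oriented does not mean IOPS-free.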
-
- Veeam Legend
- Posts: 1207
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Reference Architecture
Gostev,
Thank you.
Perhaps we were too dramatic - we want to get rid of ReFS on our primary site because we need to run on 2019 for several reasons, and ReFS seems to have issues mainly with deletes; the impact is also biggest there.
On the remote site it is not as problematic, since we do not run synthetic fulls there and thus only a few blocks are deleted at any one time.
We just feel so left alone with ReFS. We burned through a 60-hour Premier support case with Microsoft without them even starting to understand the issue. Right now we have a three-month Professional case, which is also going nowhere.
Microsoft keeps telling us "there are very few clients with 600 TB of ReFS on one server - this can have unpredictable results" - yeah, that does not help us.
We reduced retention, disabled per-VM backup files, purchased new and faster storage, updated every driver, tried fewer backup streams, reinstalled the Veeam server twice, purchased more RAM, purchased new Veeam hardware...
Still, after all this, ReFS locked up on 5 of the last 7 days, needing a reboot after every attempt.
Veeam support tries to help but ultimately cannot do much.
Every time we think we have it fixed, it comes back 1-2 months later.
For 2 years now, every morning I have prayed that nothing happened during the nightly backup window.
And we had so much production downtime because of snapshots staying open for more than 16 hours...
We just miss the predictable performance of NTFS...
-
- Veeam Software
- Posts: 21144
- Liked: 2143 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Reference Architecture
Here's most likely the document you were looking for.
-
- Veeam Legend
- Posts: 1207
- Liked: 417 times
- Joined: Dec 17, 2015 7:17 am
- Contact:
Re: Reference Architecture
Exactly! Thank you very much!