-
- Influencer
- Posts: 22
- Liked: 1 time
- Joined: Mar 14, 2014 7:22 pm
- Contact:
Topology recommendations
This is our backup and DR topology for an Essentials Plus VMware setup with 3 ESXi hosts. The two ESXi hosts on the main site run around 11 VMs each. The DR site has a host that holds Veeam replication jobs for every VM and has enough resources to boot them up in a DR situation. The two sites are connected by fiber. The DR site holds our main and only B&R server and one of three DCs (each ESXi host has a DC VM). The main site holds a physical Veeam proxy server, not joined to the domain, with local storage for all jobs. It has a tape library connected, and we run nightly file server backups to tape and monthly backups of all servers to tape. These tape backups are just a copy of the files and folders on the main repository; they are not new or separate backup jobs. I run backup copies to a NAS in an offsite building connected by fiber. The NAS is connected to the main Veeam server via iSCSI.
We have run Veeam for years and I'm still running reverse incremental on all our jobs; it's time to review this. The backup jobs finish quickly, so that's not an issue. I was just assuming that Instant Recoveries would be faster with this choice?
I feel fairly good about losing the entire main site (with Veeam replication covering it), but I'm looking for any tips/ideas for the general backup or backup copy process.
Should I run a new backup job for the tape jobs or is a copy of the files/folders ok? (all eggs in one basket so to speak).
We have a new NAS and I'm thinking of going NFS instead of iSCSI. I recently had to restore a file from our main file server, which is around 2.8 TB in size, and I had to restore it from a monthly tape. It took 13.5 hours just to restore the backup to disk so I could then restore the file from the .vbk. This got me thinking that maybe I should do monthly backups to the new NAS first and then to tape. Would these be incremental? Can you run two separate jobs of one server with the same Veeam server? Does this mess up change block tracking (CBT)?
Any ideas or improvements would be appreciated!
-
- Influencer
- Posts: 22
- Liked: 1 time
- Joined: Mar 14, 2014 7:22 pm
- Contact:
Re: Topology recommendations
So basically, I'm putting a lot of eggs in one basket, counting on the original backup files to be good. The backup copy jobs just copy those files and the tape jobs just copy those same files to tape.
Should I be doing something different? Currently I run reverse incrementals for 14 days, with weekly storage-level corruption guard (health check) and file maintenance instead of any periodic active full.
Thanks!
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: Topology recommendations
Hey Rascii,
Lot to unpack here, but a few comments:
>We have a new NAS and I'm thinking of going NFS instead of iSCSI?
iSCSI will almost certainly be more performant than NFS, plus you can use block cloning with XFS/ReFS on it. I would not use NFS unless it was a necessity (it's not bad, but it also comes with a lot of maintenance, IMO).
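To illustrate why block cloning matters, here is a conceptual Python sketch (not Veeam code, just the idea): on XFS/ReFS, files are effectively lists of references into a shared block store, so a synthetic full "copies" the chain by reference instead of rewriting the data.

```python
# Conceptual sketch of block cloning (reflink) on an XFS/ReFS repo:
# payload blocks are stored once; files are just lists of block refs,
# so a synthetic full costs metadata, not a full data rewrite.

block_store = {}  # block_id -> bytes actually on disk

def write_file(name, blocks, files):
    refs = []
    for bid, data in blocks:
        block_store.setdefault(bid, data)  # store each payload once
        refs.append(bid)
    files[name] = refs

files = {}
write_file("full.vbk", [(0, b"A"), (1, b"B")], files)
write_file("inc1.vib", [(2, b"C")], files)

# Synthetic full: reference the existing blocks; no payload is rewritten.
files["synthetic.vbk"] = files["full.vbk"] + files["inc1.vib"]
```

On a plain filesystem, building that synthetic full would mean physically re-copying every block of the chain instead.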
>The backup copy jobs just copy those files
Not true! A backup copy isn't a dumb copy; it's a "backup of a backup". The individual blocks necessary to create a restore point are copied from the source job into the backup copy chain, and the same validation done during backups is done during backup copies as well, so you can rest pretty easy if the backup copy at least completes without CRC errors. Health checks make sleeping at night a lot easier!
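The idea behind that validation can be sketched in a few lines (hedged, conceptual Python, not Veeam internals): keep a checksum per block at backup time, then re-read and recompute during a health check so silent corruption is caught.

```python
# Hedged sketch of a per-block health check: store a CRC alongside each
# block at backup time, then re-read and compare during verification.
import zlib

def backup(blocks):
    # Each restore point keeps (data, crc) per block.
    return [(b, zlib.crc32(b)) for b in blocks]

def health_check(restore_point):
    # Re-read every block and recompute its CRC.
    return all(zlib.crc32(data) == crc for data, crc in restore_point)

point = backup([b"block-0", b"block-1"])
assert health_check(point)            # clean copy passes
point[1] = (b"bitrot!", point[1][1])  # simulate silent corruption
assert not health_check(point)        # health check flags it
```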
>Currently I do Reverse Incrementals
If this works for you, there's no harm in keeping it, but reverse incremental is pretty I/O intensive on the repository. If it works, it works.
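The I/O cost is easy to see with rough arithmetic (illustrative numbers, not a Veeam formula): reverse incremental reads the old block out of the .vbk, writes it to the rollback (.vrb) file, and writes the new block into the .vbk, so roughly three operations per changed block, with the .vbk writes being random. Forward incremental just appends each new block to a .vib.

```python
# Rough arithmetic (illustrative): repository I/O per changed block.
# Reverse incremental: read old block from .vbk + write it to .vrb
#   + write new block into .vbk  -> ~3 ops, largely random I/O.
# Forward incremental: append new block to .vib -> ~1 sequential write.

def repo_ops(changed_blocks, mode):
    per_block = {"reverse": 3, "forward": 1}[mode]
    return changed_blocks * per_block

changed = 50_000  # hypothetical changed blocks in one nightly run
assert repo_ops(changed, "reverse") == 3 * repo_ops(changed, "forward")
```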
> I recently had to restore a file from our main file server which is around 2.8 TB in size and I had to restore the files from a monthly tape. This took 13.5 hours just to restore to disk so I could then restore the file from the .vbk.
This is kind of the nature of tape. Veeam's VM-to-tape solution moves the backup files in a smart way, and since the read pattern necessary for file-level recoveries would be catastrophic to your tape hardware, staging the backup first is required. How often are you finding you need to do recoveries from such points on tape? It might be better to revisit the files you need to protect and see if there's a better way of protecting them (maybe file share backup works for you!), or look towards S3 providers for immutable protection.
Overall it sounds like, aside from a few misconceptions, you actually have a very nice setup, and if it ain't broke, don't fix it. You have really good off-site copies with tape and BCJ, your backup/restore times from disk seem normal, and I think there's no real obvious flaw. I would just revisit what you find yourself pulling from tape frequently and rethink how you're protecting that data.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Topology recommendations
>I was just assuming that Instant Recoveries would be faster with this choice?
Not notably, unless you have an extremely long incremental chain.
>Should I run a new backup job for the tape jobs or is a copy of the files/folders ok?
Provided you test your backups (think SureBackup), it is totally ok to then copy those offsite (health check ensures they are bit identical) and tape out.
>Can you run two separate jobs of one server with the same Veeam server?
Sure, you can; this doesn't mess up CBT.
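The reason two jobs don't interfere can be illustrated conceptually (a hedged Python sketch, not the actual vSphere API): CBT answers "which blocks changed since change ID X?", and each job remembers its own X, so one job consuming changes doesn't reset anything for the other.

```python
# Conceptual sketch (not the real vSphere API): CBT keeps a change
# history and answers "what changed since change-ID X?"; each job
# tracks its own X, so two jobs on one VM don't step on each other.

class FakeCBT:
    def __init__(self):
        self.log = []   # (sequence_number, block) change history
        self.seq = 0
    def change_block(self, block):
        self.seq += 1
        self.log.append((self.seq, block))
    def changed_since(self, change_id):
        # Return changed blocks plus the new change-ID to remember.
        return {b for s, b in self.log if s > change_id}, self.seq

cbt = FakeCBT()
cbt.change_block(1); cbt.change_block(2)
blocks_a, id_a = cbt.changed_since(0)   # job A backs up blocks {1, 2}
cbt.change_block(3)
blocks_b, id_b = cbt.changed_since(0)   # job B independently sees {1, 2, 3}
blocks_a2, _ = cbt.changed_since(id_a)  # job A's next run only sees {3}
```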
-
- Influencer
- Posts: 22
- Liked: 1 time
- Joined: Mar 14, 2014 7:22 pm
- Contact:
Re: Topology recommendations
Tape restores are not very common; it just happened to be so in this case. I should also mention I have encryption enabled on every job. I assume that's best practice? I had read that iSCSI can be complicated, or that if there are issues you could lose data? So I was thinking maybe keep the iSCSI NAS and add this new NAS with a different protocol, to keep the eggs out of the same basket? I read SMB was not the best because of write-through behavior, so I thought maybe NFS was more reliable?
The answers so far have been a great help! I'm dedicating the next couple days to read as much as possible to make sure we're doing best practice. If there are any great links I would welcome that too.
-
- Expert
- Posts: 119
- Liked: 11 times
- Joined: Nov 16, 2020 2:58 pm
- Full Name: David Dunworthy
- Contact:
Re: Topology recommendations
Between NFS and SMB you definitely choose NFS. But if iSCSI is available, use it; as mentioned earlier, it will then allow you to use ReFS or XFS.
I personally am using scale-out repositories and AWS S3 with immutability. You might consider moving away from tapes entirely and going to this.
1. Create a SOBR.
2. Specify your existing NAS backups as the performance tier.
3. Specify an S3 bucket as the capacity tier.
Enable GFS in your main backup job and let the SOBR policy use copy mode and move mode. Your short-term chains will then be in both places and your long-term points in the S3 bucket only.
I just completed a project where I moved from tapes to this and it is awesome now. Maybe a few minutes to restore an old file vs. having tapes literally delivered to me a day later, then restoring that to disk, and then restoring the actual files off that, just like you experienced.
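The copy-plus-move policy above can be sketched conceptually (hedged Python, illustrative only, not Veeam's tiering engine): copy mode mirrors every sealed restore point to the capacity tier, and move mode additionally frees points older than the operational window from the performance tier.

```python
# Hedged sketch of a SOBR capacity-tier policy: copy mode mirrors every
# sealed restore point to object storage; move mode additionally offloads
# points older than the operational window from the performance tier.

def tier(point_ages_days, window_days):
    """Return (performance_tier, capacity_tier) point ages, copy+move on."""
    capacity = list(point_ages_days)  # copy mode: everything lands in S3
    performance = [a for a in point_ages_days if a <= window_days]  # move mode
    return performance, capacity

perf, cap = tier([1, 7, 30, 365], window_days=14)
assert perf == [1, 7]           # short-term chain stays local (and in S3)
assert cap == [1, 7, 30, 365]   # long-term GFS points live in the bucket
```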
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: Topology recommendations
Got a link? I fail to see how a protocol can care about whether or not the data being sent is encrypted.
-
- Influencer
- Posts: 22
- Liked: 1 time
- Joined: Mar 14, 2014 7:22 pm
- Contact:
Re: Topology recommendations
I did not mean to imply that encryption was related to any iscsi issues.
This link is a lot of reading, but it was one that gave me some of the information. veeam-backup-replication-f2/gostev-s-di ... 63695.html
Particularly this comment below... I don't see where Gostev replied to the iSCSI part of his post, so his reasoning might not be valid?
* last, but not least, when we have 1 proxy on each host and one host dies, the recovery process includes connecting that LUN to a different proxy on another host - leading to a risk of connecting to a LUN from more than 1 location due to human mishap. For those that don't know what this means: it's kind of like connecting a SATA cable from one disk to two computers in parallel. In practice, it means instant corruption of your backup repository, precisely the one you need right now to perform a restore. That's risky! With SMB that simply can't happen. And backup is all about lowering risks, so this reason alone is enough to scare people away from iSCSI.
So "just use iSCSI" may sound like a good solution, but in practice it does introduce other problems and potentially also some very dangerous risks. Please feel free to correct me if I missed something in the above reasoning; maybe there's something I don't know, but that's how I see it with my current knowledge.