rreed
Veteran
Posts: 354
Liked: 73 times
Joined: Jun 30, 2015 6:06 pm
Contact:

About to insert dedicated backup NAS into existing setup

Post by rreed »

Afternoon all. We have an existing Veeam B&R setup in our environment: Veeam server, several proxies, all backing up directly to dedupe devices (appliance/hot-add mode, basic CIFS share repositories). I've scrounged up a couple of old beater iSCSI SANs that I'm going to "insert" between my Veeam setup and our dedupe devices as a landing zone/staging area. Right now it's B2Dedupe, but I'm shooting for B2D2Dedupe (and tape as well, but we'll get to that later).

Our VMware environment is all iSCSI on the back end, and the proxies just hot-add and send the data over the general data network to our dedupe devices. Is that correct? What I'd love is to run Direct SAN mode to hit the LZ, then go from the LZ to the dedupe device via whatever gets it there (presumably the same data network, the same way it currently does). Presumably the job setup would be a normal backup job to the LZ, then a copy job from the LZ to dedupe once all is said and done.

As for the iSCSI setup: do I need to make my backup SAN part of my VMware environment as datastores, have it simply presented and connected to the hosts but not added as datastores, or something else? Details please? My thought is to keep the backup SAN completely separate so the other admins don't accidentally "borrow" some of my backup SAN space by migrating VMs to it. If I can't, that's fine, but it would be preferable. Having it separate, with the proxies/Veeam server connecting via iSCSI initiator for exclusive access, would be great, but I'm not sure how the data would get from our VMware datastore SAN to our backup SAN at SAN speed.

Our current jobs and existing data all point directly to our dedupes. I haven't done the math yet, but I'm hoping for at least a couple of days' worth of backups on the LZ; long-term of course goes to dedupe. Would I need to disable our current jobs and set up new ones pointing to the LZ, with the copy jobs, etc., or would it be possible to split out the chains so that, say, the most recent chain gets moved to the LZ while older stuff remains on dedupe, with v8 smart enough to map it? I'd love to incorporate our existing backup data if possible, but if we have to make a cut from the old job data, start with new, and then once our retention period passes on the calendar just delete the old data manually, that's fine. I realize this might be a convoluted request.
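For ballparking that "couple of days' worth" on the LZ, a rough sizing sketch in Python. Every figure below is a hypothetical placeholder, not data from this environment; substitute your own job statistics from the Veeam console.

```python
# Rough landing-zone capacity sketch. All numbers are hypothetical
# placeholders -- plug in your own full/incremental sizes and retention.

full_backup_tb = 4.0        # size of one full backup after compression
daily_incremental_tb = 0.4  # average daily incremental size
days_on_lz = 3              # days of restore points to keep on the LZ
headroom = 1.3              # ~30% free space for merges and spikes

lz_needed_tb = (full_backup_tb + daily_incremental_tb * days_on_lz) * headroom
print(f"Landing zone should have at least {lz_needed_tb:.1f} TB usable")
```

With those made-up inputs the LZ would need roughly 6.8 TB usable; the point is just that a few days of retention costs one full plus the incrementals, with headroom on top.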

All said and done, I also need to archive to tape via a dedicated tape backup server (tape library, SAS). I currently have it configured and working and am playing around with it. What I'd like is the above, with whatever few days' worth of most recent backups on the LZ, copied to dedupe for long-term archival, but to tape as well for permanent archival. What I really need is a "one-to-many" backup job: back up to the LZ, then copy to dedupe AND tape (or whatever else ya got; cloud, whatever). Is that possible with either v8 or even v9? Would it be separate copy jobs in v8 as a workaround? Copy job A copies it to dedupe; copy job B copies it to tape?

Many thanks and sorry for the long-windedness.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by skrause »

Unless the devices you are planning on using for a landing zone can expose a CIFS share, you will need a server, connected to those SANs, to act as the Veeam repository server.

To do Direct SAN access mode you would need a proxy server that has host mappings to your production storage to read (and restore).

There is no reason why those could not be the same box.
Steve Krause
Veeam Certified Architect
rreed
Veteran
Posts: 354
Liked: 73 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by rreed »

My LZ is a SAN, and I just now realized I misspelled it in the subject. Crap. The LZ is a SAN, not a NAS (sorry), and I can present our production storage to our proxies as one normally would for Direct SAN access. Would the VMware hosts also need mappings to my LZ SAN, or just the proxies?
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by skrause »

Your LZ SAN would not need mappings to anything other than the server(s) running as Direct SAN proxies, which would also need to be configured as repository servers. The servers running the Veeam proxy and repository roles would be doing all of the reads/writes from your LZ storage. They would also be the source for the backup copy jobs going to your DataDomain.

I am assuming you are going to have these be physical machines? I have heard people talk about running them as VMs using the Windows iSCSI initiator inside the guest to connect to the SAN fabric, but most people I have talked to are not fans, since that adds an additional layer of complexity and a point of failure for storage traffic.
rreed
Veteran
Posts: 354
Liked: 73 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by rreed »

Proxies are VM's. The only physical server we have is our tape server.

Alright, so each of my proxies would basically need an E: drive mapped to a volume on my LZ SAN and used as a local repository. Basically bust up my LZ SAN into LUNs, with each of my proxies getting a slice of the pie as its own individual repository (boy, v9 is going to be great for handling that part)? In short, my proxies just write backup data to themselves as repositories (the repositories being the LZ SAN)?

For pulling my VM backup data from the production SAN, that's the part where my source SAN needs its LUNs presented to my proxies via iSCSI initiator per the normal Direct SAN setup, correct? So in short, I set up my source production SAN for typical Direct SAN mode, but my LZ SAN just gets mapped as additional disks on my proxies?

Just doing this at all is going to add an additional layer of complexity as well as a point of failure, these being old SANs that management might be reluctant to renew support on. My contingency plan is to keep my original backup jobs that pipe VMs straight to the dedupe boxes, so if/when my LZ SAN fails I can disable my B2D2Dedupe jobs, re-enable my old B2Dedupe jobs, and at least continue backing up while I figure out how to recover from the LZ failure. I'm effectively "the" backup engineer here now, so wrangling all this junk together is my job.
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by skrause »

Typically the reason to use Direct SAN is to eliminate the impact of backup traffic on your production systems. If you are doing Direct SAN on proxies that are VMs, and setting up your repositories as VMs in the production environment, you are actually adding to the load on your production systems pretty substantially. That is why the best practice for Direct SAN is typically a physical server with mappings to the production SAN, which then either has some kind of direct-attached storage or sends to another repository (server or dedupe appliance) over the network.

It also means that if your production VMware environment gets hosed, it is much harder to restore, as you need to recreate the paths to the backup infrastructure before even attempting any restores.

You don't need to have every proxy also be its own repository as well if you are running them as VMs.

The way I would set it up, with what you have described as your setup is this:

VM proxies using Hot-Add mode (you will probably get similar performance to Direct SAN through an in-guest iSCSI connection anyway).
Add an iSCSI NIC to your tape server and map the LZ SAN to it, creating however many Windows volumes you like; then set that up as your LZ repository in Veeam.
Create a backup copy job from the new repository/tape server to the dedupe appliance.
rreed
Veteran
Posts: 354
Liked: 73 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by rreed »

Being a M-F, 8-5 business, we're not too concerned with nightly network traffic. We're 10Gbps here for the most part and I'm happy to wail on it. How does setting up VM proxies for Direct SAN access add substantial load to our system? Bear with me here please, just trying to learn.

If I follow you correctly, I would connect my LZ SAN exclusively to my physical tape backup server as its own storage, which of course becomes the repository(ies). Point my backup jobs to the tape server's new repositories as the LZ, then copy jobs from there to dedupe/tape?
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by skrause »

rreed wrote: If I follow you correctly, I would connect my LZ SAN exclusively to my physical tape backup server as its own storage, which of course becomes the repository(ies). Point my backup jobs to the tape server's new repositories as the LZ, then copy jobs from there to dedupe/tape?
Yes.


As far as the performance impact: when you use an iSCSI initiator from inside a VMware guest, you have to connect it through a guest network on your vSwitch that is connected to the VLAN where your iSCSI lives, since you cannot access vmkernel interfaces directly from inside a guest. So depending on how your host networking is set up, you could be adding quite a substantial amount of traffic to network interfaces that do not normally carry storage traffic. If you use dedicated network interfaces on your hosts for iSCSI traffic to your production SAN, this could potentially be an issue. I guess you could create a separate port group on your vSwitch for those in-guest iSCSI networks and map it to the same uplinks your production storage vmkernel interfaces use, but you are still passing iSCSI traffic through the VMware Tools network driver in Windows.

If you don't care about backup windows so much, then the potential performance gain of Direct SAN over Hot-Add is probably not going to matter anyway, and Direct SAN adds a lot more potential headaches come restore time, which is why I would just recommend using Hot-Add.
rreed
Veteran
Posts: 354
Liked: 73 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by rreed »

Sounds like excellent advice on both counts, skrause. A great many thanks. Sounds like I might be off-board on Direct SAN access in our environment then, and that's fine.

As I sketch out hanging our LZ SAN off our tape backup server as local storage on a bar napkin here, I'm liking this new battle plan quite a bit. Many thanks again. I'm wondering if that might also help solve the throughput issue of the backup copy job across the network to tape being so slow.
skrause
Veteran
Posts: 487
Liked: 106 times
Joined: Dec 08, 2014 2:58 pm
Full Name: Steve Krause
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by skrause »

My experience with our tape jobs is that they are very slow when pulling off a remote repository but run at full speed when reading backups located on local repositories (our tape server is our repository server in that location). Also, if you are currently pulling your tape jobs off the DataDomain, you will certainly see an increase in tape throughput by shipping the jobs from local storage to tape directly, as there is no rehydration needed.

Good luck.
rreed
Veteran
Posts: 354
Liked: 73 times
Joined: Jun 30, 2015 6:06 pm
Contact:

Re: About to insert dedicated backup NAS into existing setup

Post by rreed »

Same here: despite a combination of 10Gbps/1Gbps network connections, we're seeing at best 200-300Mbps throughput to tape from remote repositories. I don't know why, but I'm hoping they fix that in v9. Locally on the tape backup server, full speed of 1.2Gbps to our LTO-6 library. And yeah, that's what we're doing right now: having copy jobs pull off the dedupe devices to the tape server locally, then running a tape job against that to get full speed.
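To put numbers on that gap, a quick back-of-the-envelope in Python using the rates quoted above; the 2 TB job size is a hypothetical example, not a figure from this environment.

```python
# Compare tape job duration at the remote-repository rate (~0.3 Gbps)
# versus the local rate (1.2 Gbps). The 2 TB job size is made up.

def hours_to_copy(size_tb: float, rate_gbps: float) -> float:
    """Hours needed to move size_tb terabytes at rate_gbps gigabits/sec."""
    size_gbits = size_tb * 1000 * 8   # TB -> gigabits (decimal units)
    return size_gbits / rate_gbps / 3600

job_tb = 2.0
print(f"Remote repo at 0.3 Gbps: {hours_to_copy(job_tb, 0.3):.1f} h")
print(f"Local repo at 1.2 Gbps:  {hours_to_copy(job_tb, 1.2):.1f} h")
```

At those rates the same hypothetical 2 TB job drops from roughly fifteen hours to under four, which is why staging to local storage before the tape job pays off.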

A great many thanks for the suggestion again, will report back as we make progress.