Comprehensive data protection for all workloads
MSoft
Influencer
Posts: 10
Liked: never
Joined: Oct 22, 2010 3:23 am
Contact:

Full & Incremental backup for offsite storage

Post by MSoft »

Hello,

I’d like some advice on the best way to perform full & incremental backups for offsite storage.

Our infrastructure includes:
- Dell PowerEdge 2900 server
- Storage consists of internal disks (8 x 300GB RAID-5) on server + Netgear ReadyNAS Pro (6 x 1TB disks RAID-5).
- ESXi 4.0 host OS on 2900 (6 VMs) - 1 datastore on server disks + 1 on ReadyNAS (need 2nd datastore as insufficient space on server).

Our legacy backup method uses Symantec Backup Exec - weekly full backup to USB disk + nightly incremental backup to tape. USB disk is connected to ReadyNAS USB port as ESXi 4.0 does not support USB pass-through to VM guest. The backup media (USB disk & tapes) is transported to an offsite location each day.

We want to follow the same backup cycle using Veeam Backup v5 - i.e. full backup weekly & incremental backup daily. At present the Veeam backup target is a folder on the ReadyNAS - the full backup file (.vbk) is manually copied to USB disk for offsite storage, but the incremental files (.vib) are not stored offsite as yet. Veeam backup file size is about 750GB for full and 40-100GB for incremental.

My questions are:
1. Backups are quite slow (20-30 hours for full & 5-10 hours for incremental). This is probably due, at least in part, to having both the source (ReadyNAS datastore) and target on the same NAS device. Any suggestions on a better configuration that is not too costly?
2. What is the best method to automatically get backup files onto removable media - i.e. full backup file (.vbk) to USB disk and incremental files (.vib) to tape? All Veeam backup files are written to the same folder and there does not appear to be an option to nominate different targets for full and incremental backups, which makes automating the copy process a little tricky.
3. Backup Exec has an option to encrypt backup media - how can we achieve this with Veeam?
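For a sense of scale on question 1, the implied throughput can be worked out from the numbers above (750 GB full backup, 20-30 hour window). A quick back-of-the-envelope check, treating 1 GB as 1024 MB:

```python
# Rough effective throughput of the full backup, based on the
# figures in the post above (750 GB in 20-30 hours).
full_gb = 750
hours_low, hours_high = 20, 30

mb = full_gb * 1024               # size in MB
best = mb / (hours_low * 3600)    # MB/s at the fast end (20 h)
worst = mb / (hours_high * 3600)  # MB/s at the slow end (30 h)

print(f"Effective throughput: {worst:.1f} - {best:.1f} MB/s")
# A single gigabit link tops out around 110-120 MB/s in practice,
# so at ~7-11 MB/s the wire is not the limit; the NAS serving as
# both source datastore and backup target is the likelier bottleneck.
```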

Any suggestions for improving this process would be welcome.

Thanks,
Ries
Gostev
Chief Product Officer
Posts: 31460
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Full & Incremental backup for offsite storage

Post by Gostev »

Hi Ries,

1. Are you using virtual appliance processing mode? Is your Veeam Backup server good on CPU when the job is running? If yes to both, then it is hard to suggest anything else but getting faster storage for your target. BTW, depending on the ReadyNAS model you have today (which model is it?), it may be your bottleneck as well.
2. Using post-job script in the advanced job settings? You can search this forum for PowerShell script that picks up the latest backup.
3. We do not provide encryption functionality today, so you will need to use some 3rd party process (TrueCrypt etc.)

Thanks!
topry
Enthusiast
Posts: 49
Liked: 1 time
Joined: Jan 07, 2011 9:30 pm
Full Name: Tim O'Pry
Contact:

Re: Full & Incremental backup for offsite storage

Post by topry »

As Gostev suggested, the first thing I would look at is the Veeam VM.

Check the performance stats in vCenter for the Veeam VM and see what kind of CPU usage, disk I/O, and latency you have.
If you are pegging the capabilities of that system and the array, then increasing the performance of that system would be my first suggestion. If the array exposes performance stats, check those as well. If, however, your CPU usage during backups is not staying above 80% and your disk I/O is below the array's potential, then try to find the bottleneck.

I've tested with the backup target on the same controller as the source and on a different controller (a small improvement on the separate controller, as expected, and it increases as load on the source increases). I also tried MPIO, but actually saw worse performance (our target is an iSCSI volume on the same array).

I first tested Veeam on a Win7 x64 VM with 4 and then 8 GB of RAM, and then on 2008 R2. There was a significant performance improvement under 2008 R2, as a 2008 R2 VM can be configured with 4 vCPUs vs 2 for Win7 (increasing RAM from 4 to 8 GB made no difference, as the system never used over 60% when set to 4 GB).

As for the ReadyNAS, make sure your iSCSI targets are using different subnets. There are many articles on configuring iSCSI targets for multi-path, binding each physical NIC to a virtual and the vmnics to the iSCSI HBA. We currently use a different subnet for every target and have ESX set to Round-Robin, with all iSCSI connections on dedicated switches. If your iSCSI connections are not on dedicated switches (or direct connect), use VLANs if you have managed switches - though if you have only a single ESX server, direct connect would give better performance and be a lot easier to configure and maintain.

Even using a good gigabit switch for iSCSI, I saw slightly better performance using CAT-6 direct connect. If the ports on the ReadyNAS do not auto-detect, you may need to use crossover cables.

We had considered a ReadyNAS for secondary storage, so I would be interested in what type of performance you experience with it.
MSoft
Influencer
Posts: 10
Liked: never
Joined: Oct 22, 2010 3:23 am
Contact:

Re: Full & Incremental backup for offsite storage

Post by MSoft »

Thanks Gostev & Topry for your replies.

- Yes, I’m using virtual appliance processing mode.
- The VM running Veeam is Win2008R2 configured with 4 vCPUs & 8GB RAM - it is using 15-30% CPU & about 6GB RAM. The physical ESXi server is running at between 20-50% CPU.
- ReadyNAS model is Pro Business Edition (RNDP6310).
- Disk Read Latency average is 2.5 – 3.5 milliseconds.
- Disk Write Latency average is 0.067 milliseconds.
- ReadyNAS does not appear to provide any performance stats!
- ReadyNAS datastore is NFS – would iSCSI be better?
- How do I direct connect the ReadyNAS to ESXi?
- ReadyNAS is definitely slower than the internal storage on the host server, which has 15k SAS drives, but is acceptable for general purpose use. We notice the performance difference when running large batch processing.
- Would using eSATA as a datastore improve performance over a NAS? SAN direct attached storage seems to be rather expensive.

Cheers,
Ries
topry
Enthusiast
Posts: 49
Liked: 1 time
Joined: Jan 07, 2011 9:30 pm
Full Name: Tim O'Pry
Contact:

Re: Full & Incremental backup for offsite storage

Post by topry »

MSoft wrote:Thanks Gostev & Topry for your replies.
- ReadyNAS datastore is NFS – would iSCSI be better?
- How do I direct connect the ReadyNAS to ESXi?
- ReadyNAS is definitely slower than the internal storage on the host server, which has 15k SAS drives, but is acceptable for general purpose use. We notice the performance difference when running large batch processing.
- Would using eSATA as a datastore improve performance over a NAS? SAN direct attached storage seems to be rather expensive.
While I have read articles comparing read/write performance between the two, claiming iSCSI is better in some areas, I have not run comparative tests on the same hardware, so I will defer to Gostev and others on using NFS as a Veeam backup target. If your Veeam VM is averaging less than 30% CPU usage, it sounds like the target, or the link to that target, is the bottleneck. We average 80%+ CPU usage and 65% throughput on the iSCSI NIC, with peaks in the low 90s, on a similar Veeam VM config. When using direct connect our throughput was a little higher.

Direct connect suggestion was assuming iSCSI. If the ReadyNAS is only used as an NFS target, I have not tried that config, though from a hardware perspective it would be the same. Before doing that, I would suggest getting more input from someone using NFS as a target.
I quickly looked at the specs on the Pro 6 series. It appears it has two 1 Gb NICs. If you have both of these connected to your LAN and are running everything on the same subnet with unmanaged / layer-2 switches, then this is likely one of the limiting factors on your throughput.

To direct connect, run a Cat-6 cable directly from a port on the server to an iSCSI port on the ReadyNAS. It is likely the ReadyNAS will require a crossover cable (one that reverses the TX/RX pairs) when not going through a switch. This is only practical when there is a single source/target pair - i.e. no other device will be accessing the iSCSI target. Should another server need access to the VMFS/NFS LUNs (vMotion/HA), or should you want to access multiple iSCSI targets from the same source, you would need to connect via a switch. I'm not familiar with that ReadyNAS device, so I do not know if you can dedicate both ports to iSCSI; without a separate management port that may not be practical/possible. If you can, you would then need to add another vmnic to your Veeam VM on the same subnet for access.

This is an article on configuring ESXi 4.x for iSCSI multi-pathing and jumbo frames: http://karciauskas.wordpress.com/2010/0 ... bo-frames/
There are several articles that discuss the pros/cons of jumbo-frames and the pathing options when using iSCSI. I recommend testing in your environment and see what works best.

As for eSATA vs iSCSI - when trying to compare performance on a disk array, there are a lot of factors, including the type of OS and applications that will run on the VMs, whether they are predominantly read- or write-intensive, the HBA, disks, backplane, cache, controllers, RAID config, etc. While a single eSATA connection provides more bandwidth potential than a single 1 Gb NIC, how that equates to real performance depends on many other factors. One to one it is the better single connection, but for this specific purpose - as a backup target for Veeam - I could only guess. Since my experience with Veeam is limited to iSCSI targets, hopefully others who have used other configurations can provide feedback. Just because the highway has 8 lanes vs 2, if the gating factor is slow disks or controllers, the higher bandwidth connection will not help.
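To put rough numbers on the bandwidth side of that comparison (nominal link rates only, before protocol overhead - these are general spec figures, not measurements from the hardware in this thread):

```python
# Nominal raw link rates in megabits per second. General spec values,
# assumed for illustration; real throughput will be lower.
links_mbps = {
    "1 GbE (iSCSI/NFS)": 1000,
    "eSATA (3 Gbit/s)": 3000,
}

results = {}
for name, mbps in links_mbps.items():
    mb_per_s = mbps / 8                        # raw MB/s on the wire
    hours_750gb = 750 * 1000 / mb_per_s / 3600  # time to move a 750 GB full
    results[name] = (mb_per_s, hours_750gb)
    print(f"{name}: ~{mb_per_s:.0f} MB/s raw, 750 GB in ~{hours_750gb:.1f} h")
# Against the 20-30 hour fulls reported above, either link has headroom
# to spare - consistent with the point that disks and controllers, not
# the wire, are the gating factor.
```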

All iSCSI devices are not created equal <g>. We have a box from Promise Tech that is called a 'SAN' but is really a NAS. While it has two 1 Gb ports, there is no separate dedicated management port, it does not support multi-pathing, and it has only one controller and limited cache. Putting fast drives in this device would be a waste, as the bottleneck is at the entry point (the NICs and controller), not the drives themselves. Looking at the specs on the ReadyNAS, I think you have a similar situation.