Host-based backup of VMware vSphere VMs.
pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am
Contact:

CBT throughput slower than with Full Backup?

Post by pinkerton »

Hi,

we are using VBR6 with vSphere 5 in a Direct SAN access environment (one VBR server with all "roles" installed), and I am wondering why CBT transfer rates for reversed incremental backups seem to be much slower than transfer rates for full backups. For example, the following screenshot shows a full backup of a given virtual machine running at about 130-170 MB/s:

Image

The subsequent (i.e. incremental) backup, however, seems to be much slower at only 15-35 MB/s:

Image

It was the same with VBR5. Shouldn't the incremental backups run at the same rate as the full backup? Or is this due to the fact that incremental backups generate more load on the target server, since the changes need to be injected into the existing VBK file? At least this is what the bottleneck indicator suggests, and it would be logical.

And: why do the real-time statistics for the full backup (screenshot 1) show CBT as well? I thought CBT was only used for incremental backups?

Thanks
Michael
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: CBT throughput slower than with Full Backup?

Post by Gostev »

Hi Michael,

Random I/O throughput is always MUCH slower than sequential I/O; this is by hard disk design (random I/O means milliseconds of seek time and rotational latency per operation, which of course hits raw throughput numbers very badly).
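To make that gap concrete, here is a rough, self-contained Python sketch (not related to any Veeam tooling) that times reading the same set of 4 KB blocks sequentially versus in shuffled order. On a spinning disk the random pass is dramatically slower; on an SSD or with a warm page cache the gap largely disappears, so treat this as an illustration, not a benchmark:

```python
import os
import random
import tempfile
import time

BLOCK = 4096                    # 4 KB blocks, a typical small-I/O size
FILE_SIZE = 16 * 1024 * 1024    # 16 MB test file, kept small so the sketch runs fast

def read_blocks(path, offsets):
    """Read one BLOCK at each offset and return the elapsed time in seconds."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - t0

# Create a throwaway file filled with random data.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
with open(tmp.name, "wb") as f:
    f.write(os.urandom(FILE_SIZE))

n_blocks = FILE_SIZE // BLOCK
sequential = [i * BLOCK for i in range(n_blocks)]
scattered = sequential[:]
random.shuffle(scattered)       # same blocks, random order

t_seq = read_blocks(tmp.name, sequential)
t_rand = read_blocks(tmp.name, scattered)
print(f"sequential: {FILE_SIZE / t_seq / 1e6:.0f} MB/s")
print(f"random:     {FILE_SIZE / t_rand / 1e6:.0f} MB/s")
os.unlink(tmp.name)
```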

Also, please notice that in the case of the incremental run, your target storage is the performance bottleneck. Either it is being hit by multiple jobs at the same time, or you are using reversed incremental backup mode (which is very I/O heavy on the target), or maybe the target storage controller settings are not optimal for non-sequential writes (as opposed to the sequential writes of a full backup).

CBT is in fact used with full backups as well; you can search this forum for more info.

Thanks!
pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am
Contact:

Re: CBT throughput slower than with Full Backup?

Post by pinkerton »

Hi Anton,

thanks, I actually hadn't thought of the sequential/random influence. It makes sense that with full backups we only see sequential workloads (whole VMDKs are essentially just copied), whereas incremental backups generate random workloads (only changed data is picked up, which cannot be read and written in one piece). And we are indeed using reversed incremental backup with quite a slow target (at least for random workloads): 12 x 1 TB SATA disks in RAID6. In practice everything is just fast enough, though :)

Thanks
Michael
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: CBT throughput slower than with Full Backup?

Post by Gostev »

No problem. Take a look, I found these test results - the blueish bars are hard drives, the brownish ones are SSDs. See how massive a hit hard drives take going from 2 MB sequential to 4 KB random I/O. Should be useful for future readers.

By the way, notice that your incremental backup is still a few times faster than the full one (by wall clock), so don't worry about raw throughput too much ;)

Image
pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am
Contact:

Re: CBT throughput slower than with Full Backup?

Post by pinkerton »

Thanks. Actually I deal quite a lot with storage and solid state drives; I just hadn't thought of the impact here, being so blown away by the fact that our full backup with about 500 GB transferred takes about 3 1/2 hours, whereas the incremental with only 30 GB transferred takes about 1 1/2 hours. But yeah, random I/O is THE problem today; hopefully SSDs will find their way into enterprise storage systems (at acceptable costs) in the next few years.
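For what it's worth, taking those figures at face value, the implied average rates are easy to check with a quick back-of-the-envelope calculation:

```python
# Averages implied by the numbers above (500 GB full in 3.5 h,
# 30 GB incremental in 1.5 h) - rough figures, not measured values.
full_gb, full_hours = 500, 3.5
incr_gb, incr_hours = 30, 1.5

full_rate = full_gb * 1024 / (full_hours * 3600)   # MB/s
incr_rate = incr_gb * 1024 / (incr_hours * 3600)   # MB/s

print(f"full backup average:  {full_rate:.0f} MB/s")
print(f"incremental average:  {incr_rate:.1f} MB/s")
print(f"incremental wall-clock speedup: {full_hours / incr_hours:.1f}x")
```

So the incremental moves data at roughly a seventh of the full backup's rate, yet still finishes in well under half the wall-clock time.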

I actually think solid state drives will push virtualization a second time like x64 did.

It's funny, by the way, how everyone talks about CBT and incremental backups being so much faster. In theory that's true, for sure. But in practice everyone still needs to be aware that incrementals result in random workloads, which by nature are much slower than sequential ones :)

Anyway, all clear now, thanks for getting me back to reality :)
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: CBT throughput slower than with Full Backup?

Post by Gostev »

Sounds good. However, I would like to emphasize again that in your specific case, random I/O performance of the source storage is NOT the bottleneck. In fact, you still have nice headroom there with the source storage, according to the bottleneck statistics. The primary bottleneck is clearly your target storage speed. In other words, even if you replaced your source storage with SSDs tomorrow, you would see no change in backup performance - the issue is with the target. I would experiment with the RAID controller cache settings (write-back vs. write-through).
pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am
Contact:

Re: CBT throughput slower than with Full Backup?

Post by pinkerton »

Yes, but replacing the target storage with SSDs would help :) Even though the target storage is slow, though, I think the main reason is that we are using reversed incremental. I can imagine that this generates really heavy load on the disks, as the existing file needs to be modified in place. So what I said above - about incremental backup speeds being reduced by random workloads - shouldn't apply as much to "normal" forward incremental backups.
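As a rough illustration of why reversed incremental is so much harder on the target, here is a simplified cost model (based on how the mode is commonly described, not on Veeam's actual implementation): each changed block costs roughly one read plus two writes on the target, instead of a single sequential append:

```python
# Rough per-changed-block I/O cost model on the target (an assumption,
# not Veeam internals):
#   forward incremental:  1 write  (append the new block to the VIB file)
#   reversed incremental: 1 read   (old block from the VBK)
#                       + 1 write  (old block into the VRB rollback file)
#                       + 1 write  (new block into the VBK, in place)

def target_ios(changed_blocks, mode):
    if mode == "forward":
        return changed_blocks          # mostly sequential appends
    if mode == "reversed":
        return changed_blocks * 3      # read + 2 writes, mostly random
    raise ValueError(mode)

# e.g. ~30 GB of changes in hypothetical 512 KB blocks:
changed = 30 * 1024 * 1024 // 512
print("forward :", target_ios(changed, "forward"), "I/Os (sequential)")
print("reversed:", target_ios(changed, "reversed"), "I/Os (random)")
```

Triple the I/O count, issued as random operations against a RAID6 SATA array, goes a long way toward explaining the observed slowdown.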

Thanks for the hint with the cache, I'll have a look at the corresponding settings.
Gostev
Chief Product Officer
Posts: 31780
Liked: 7280 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: CBT throughput slower than with Full Backup?

Post by Gostev »

Going with RAID10 instead of RAID6, plus write-back controller settings, might be much cheaper, though ;) Just be sure to put this server on a UPS, otherwise enabling write-back would not be a good idea at all.
pinkerton
Enthusiast
Posts: 82
Liked: 4 times
Joined: Sep 29, 2011 9:57 am
Contact:

Re: CBT throughput slower than with Full Backup?

Post by pinkerton »

But the downside is that RAID10 leaves only 6 of the 12 TB usable, whereas with RAID6 we can use 10 TB. Anyway, as I said, in practice it's just not a problem; I mean, backing up 30 VMs in 1/2 hours is just great, and it is MUCH faster than VMware Data Recovery :)
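The capacity trade-off mentioned above is simple arithmetic:

```python
# Usable capacity for 12 x 1 TB disks under the two layouts discussed:
disks, size_tb = 12, 1

raid10 = disks * size_tb // 2        # mirrored pairs: half the raw capacity
raid6 = (disks - 2) * size_tb        # two disks' worth of parity overhead

print(f"RAID10: {raid10} TB usable")
print(f"RAID6 : {raid6} TB usable")
```

That 4 TB difference is the price of RAID10's much better random-write performance.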
itdirector
Enthusiast
Posts: 59
Liked: 3 times
Joined: Jan 19, 2012 8:53 pm
Full Name: friedman

[MERGED] initial full replication speed versus subsequent CB

Post by itdirector »

Is it normal for the initial full replication transfer rate to be 184 MB/s while subsequent CBT replications run at ~24-30 MB/s?

Initial full replication: Hard Disk 2 (1.2 TB), 1.2 TB read at 184 MB/s (CBT), 1:55:10
- Transferred ~900 GB

Subsequent CBT replication: Hard Disk 2 (1.2 TB), 47.8 GB read at 24 MB/s (CBT), 0:34:32
- Transferred ~48 GB

I can delete the replica and reproduce the above results.

- Veeam B&R 6.0.0.181 (64-bit) VM on the target server below - Win2k8 R2 64-bit - 4 GB - 6 vCPU - virtual appliance mode

- Source: vSphere 5 server on Dell R710 hardware - 32 GB - dual X5675 - Win2k8 Veeam proxy VM (6 vCPU, 4 GB) running virtual appliance transfer mode - 8 Gigabit NICs - VMs are on DAS - transfer rate to/from DAS is 700 MB/s+
- Target: vSphere 5 server on Dell R710 hardware - 32 GB - dual X5675 - Win2k8 Veeam proxy VM (6 vCPU, 4 GB) running virtual appliance transfer mode - 8 Gigabit NICs - replicated VMs are on DAS - transfer rate to/from DAS is 600 MB/s+
hoFFy
Service Provider
Posts: 183
Liked: 40 times
Joined: Apr 27, 2012 1:10 pm
Full Name: Sebastian Hoffmann
Location: Germany / Lohne
Contact:

Re: CBT throughput slower than with Full Backup?

Post by hoFFy »

I'm experiencing nearly identical symptoms:
An active full to the local storage of our VBR server (6 SATA 3 Gb/s drives, RAID6) runs at a processing rate of about 170 MB/s, with the network as the bottleneck, because I've attached the backup server (old 4-core CPU) with one NIC to each of our ESXi servers, which both run a Server 2012 Core VM as a proxy with 4 vCPUs. So far, so good...
During my last tests two VMs failed, so I simply started a "Retry", and performance went down to 8 MB/s with the target as the bottleneck (97%). But these two VMs are just retries of a failed active full, so... where should the random r/w come from?!

Before this, the same job had also been slow with other reversed incrementals, but with no clear bottleneck - all values were between 35 and 59%.
The target storage, a huge VMDK on one of our old servers, shows no storage latency, no high CPU usage, no high network throughput...
I can't find any hint of what the real bottleneck is.
VMCE 7 / 8 / 9, VCP-DC 5 / 5.5 / 6, MCITP:SA
Blog: machinewithoutbrain.de
foggy
Veeam Software
Posts: 21137
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: CBT throughput slower than with Full Backup?

Post by foggy »

Sebastian, what operations took longer on retry, according to the job session log?
hoFFy
Service Provider
Posts: 183
Liked: 40 times
Joined: Apr 27, 2012 1:10 pm
Full Name: Sebastian Hoffmann
Location: Germany / Lohne
Contact:

Re: CBT throughput slower than with Full Backup?

Post by hoFFy »

Reading the VMDKs took longer during the retry.

Performance during full backup:

Image

Performance during the retry:

Image
foggy
Veeam Software
Posts: 21137
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: CBT throughput slower than with Full Backup?

Post by foggy »

Can you please select the particular VMs on the left and compare the processing details for both job runs?