Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm

Reverse Incremental read/write operation details

Post by Yuki »

Hi guys,
I'm contacting NAS support and will probably contact Veeam support on this - we are still seeing slow write performance with reverse incremental on the 12- and 16-disk NAS devices in use (right now as low as 4MB/s). It just took 5.5 hours to back up 68GB of data on the file server.

Data:
Processed: 969GB
Read: 68.8GB
Transferred: 46.8GB

Summary:
Duration 7:31:29
Bottleneck: Target
Processing rate: 37MB/s

From the job:
Hard Disk 1 (60 GB) 756MB read at 3MB/s [CBT] - 0:3:47
Hard Disk 2 (2.0 TB) 67.3GB read at 4MB/s [CBT] - 5:25:39

Write performance during active fulls and forward incrementals is normal, so we are trying to see if the NAS vendor can suggest any optimizations, as we assume that's where the problem is. Can someone explain to me the exact process of reverse incremental creation so I can pass it on to their tech support and see if they can simulate/replicate the speeds we are seeing?
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Reverse Incremental read/write operation details

Post by tsightler »

This has been explained pretty well in other threads, but here's a quick summary. For each changed block, the new block is copied into the VBK, while the old block is read from the VBK and then written to the VRB file. That means, instead of 1 write I/O as with an active full or forward incremental, you have 2 write I/Os and 1 read I/O for every block. For 68GB of changed data you would have to move 3x that much, or 204GB. Not only that, but instead of being sequential I/O it's random I/O, which is normally around 3-5x slower than sequential I/O. So if random I/O is 3x slower, and I have to move 3x as much data, guess what: we're talking 9x slower backups with reverse incremental, assuming the target is the bottleneck, possibly more.
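To make the pattern concrete, here's a minimal sketch in Python. It's purely illustrative: the dict/list structures and function names are hypothetical stand-ins for the VBK/VRB files, not the actual implementation.

Code:
def forward_incremental(changed_blocks, vib):
    """Forward incremental: each changed block costs 1 sequential write."""
    for offset, new_block in changed_blocks:
        vib.append((offset, new_block))        # 1 write I/O per block

def reverse_incremental(changed_blocks, vbk, vrb):
    """Reverse incremental: each changed block costs 1 read + 2 writes."""
    for offset, new_block in changed_blocks:
        old_block = vbk[offset]                # 1 random read from the VBK
        vrb.append((offset, old_block))        # 1 write to the VRB rollback file
        vbk[offset] = new_block                # 1 random write into the VBK

So for the same 68GB of changes, the forward path appends 68GB sequentially, while the reverse path moves ~204GB, most of it random I/O.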

NAS devices make this worse because all these blocks have to be read/written via the host networking stack, increasing the latency of each of these I/Os. Jumbo frames can help with some NAS devices, and a handful have some cache tuning options, but most are just software devices with no BBWC, so performance for reverse incrementals is not going to be very good in any case.

Probably the best tool on earth for estimating your reverse incremental performance is the excellent Disk Drive RAID Configuration Tool. You can pick the drives that your NAS has (or manually enter their statistics), select the RAID configuration and stripe size, and it will give you a good estimate of the maximum sequential and random IOPS and throughput you can expect from the setup.

For example, a 12-drive RAID5 array of Seagate Barracuda 7200RPM SATA disks with a 128K stripe size and a similar request size, under a 33/67% read/write mix (basically, the profile of a reverse incremental), can at most be expected to deliver ~34MB/sec, and that's a theoretical maximum, without the overhead of NAS, which is significant. Stripe size can also have a massive impact on expected performance. For example, change the stripe size to 64K and suddenly the maximum throughput of this workload is only ~22MB/sec.
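If you want to sanity-check those numbers without the tool, here's a rough back-of-envelope model in Python. The ~75 random IOPS per 7200RPM SATA drive is an assumed figure, and it uses the standard RAID5 small-write penalty of 4 disk I/Os per host write; the tool models seek and rotational latency in more detail, so its numbers differ slightly.

Code:
def raid5_mixed_throughput_mb_s(drives, iops_per_drive, read_frac, io_size_kb):
    """Estimate host throughput for a random read/write mix on RAID5."""
    write_frac = 1.0 - read_frac
    raw_iops = drives * iops_per_drive
    # Reads cost 1 disk I/O each; RAID5 small writes cost 4 (read data,
    # read parity, write data, write parity).
    host_iops = raw_iops / (read_frac * 1 + write_frac * 4)
    return host_iops * io_size_kb / 1024.0

print(raid5_mixed_throughput_mb_s(12, 75, 0.33, 128))  # ~37 MB/s ceiling
print(raid5_mixed_throughput_mb_s(12, 75, 0.33, 64))   # ~19 MB/s at 64K

Close enough to the tool's ~34 and ~22MB/sec to show where the ceiling comes from.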

In your case you are only seeing 9-12MB/sec (the 3-4MB/s x3), which is indeed quite slow, especially if that was the only job running at the time. I don't know if you're using SATA drives, or how the RAID stripe is configured in your case; I'm just pointing out where to look when calculating your theoretical maximum, so you can then work with the vendor to see how close you can get to it.
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm

Re: Reverse Incremental read/write operation details

Post by Yuki »

Well, even if that were the case - on occasion we see an individual disk processed at 50MB/s, most of the time it is around 12-16MB/s for reverse incremental, but last night's was crawling at 3-4MB/s. An active full rolls along at 80-105MB/s.

I would expect a performance drop, but I would expect that drop to be consistent throughout the week.
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Reverse Incremental read/write operation details

Post by tsightler »

You have to compare not just the speed, but the amount of data that was read/transferred. On the nights that ran at 50MB/s, was it this very same backup job? Did those nights also transfer 68GB of data? Please feel free to share the stats from some of the other nights.
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm

Re: Reverse Incremental read/write operation details

Post by Yuki »

OK, I've just looked at one of the previous runs (from a week ago). Same job, same host, same target, etc.

Summary:
Duration 2:08:25
Processing rate: 612MB/s
Bottleneck: Target

Data:
Processed: 4.5TB
Read: 279.9GB
Transferred: 229.8GB (1.2)

Job excerpt:
Hard disk 4 (2.0 TB) 267GB read at 57MB/s - 1:19:53

Question: why the difference? We would be OK even with jobs running at 25MB/s, but 3-4MB/s is just way too slow.
tsightler
VP, Product Management
Posts: 6009
Liked: 2843 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler

Re: Reverse Incremental read/write operation details

Post by tsightler »

It's very hard to tell just looking at snippets of jobs. How long was the chain on the first run compared to the second one? Were there other jobs running concurrently? If you're seeing that much change each day then fragmentation of the VBK may play a role. Perhaps the run last week was mostly "new" blocks (which don't require rollback) while the second job was replacing blocks; see the sketch below. That's why it's difficult to troubleshoot issues like this in the forum. The problem needs to be looked at from a holistic view of the entire environment; looking at one point and comparing it to another in isolation makes it impossible to see patterns and draw accurate conclusions.
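To illustrate the new-vs-replaced distinction, here's a hypothetical extension of the earlier sketch; again, illustrative only, not the actual implementation.

Code:
def reverse_incremental(changed_blocks, vbk, vrb):
    for offset, new_block in changed_blocks:
        if offset in vbk:                      # replaced block: full rollback cost,
            vrb.append((offset, vbk[offset]))  # 1 random read + 1 write to the VRB
        vbk[offset] = new_block                # 1 write either way; a new block
                                               # has nothing to roll back

A run dominated by new blocks behaves almost like a forward incremental, which could explain last week's 57MB/s against last night's 3-4MB/s.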
Yuki
Veeam ProPartner
Posts: 252
Liked: 26 times
Joined: Apr 05, 2011 11:44 pm

Re: Reverse Incremental read/write operation details

Post by Yuki »

Actually, I'm quite sure that's what it was - new blocks with data (previously empty space) in the job that processed fast. I can tell because I know how much space is used on the server, and I know that new data is being added to a certain VMDK file when people save files. The slow-processing ones are the VMDKs that are full (let's say a 2TB disk full of data) where that data changes throughout the day.

Good point...