jbsengineer
Enthusiast
Posts: 25
Liked: 3 times
Joined: Nov 10, 2009 2:45 pm
Contact:

Processing times on large enterprise guests

Post by jbsengineer »

Is anyone backing up large (Exchange, file servers, etc.) guests in excess of 1.0TB that are very active? Even with Changed Block Tracking I'm wondering what I can expect for processing times. I notice that small VMs (we have yet to expand our backups to the enterprise systems) containing a few GB of changes daily have a processing rate of 65MB/s. Why, with CBT, does it take so long?
Gostev
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Processing times on large enterprise guests

Post by Gostev »

Josh, it is not CBT that takes long, but the fact that you have to process and write much more data to the target storage. See here for a good explanation of why it takes longer to process VMs with a large amount of changes.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Processing times on large enterprise guests

Post by tsightler »

I guess it depends on what you consider "large" and what you consider "active". We currently back up 43 VMs of various sizes and activity levels; the total size of all VMs is just under 7TB. In our experience, the bigger the VM, the better the performance, except for highly transactional systems like Exchange or busy SQL servers (either Oracle or Microsoft). Here are some examples:

1.2TB Fileserver VM -- Approx 250-300 moderate users, 10-15 heavy users -- performance is generally 400-600MB/sec, although on weekends or holidays when it gets few changes, 2-3GB/sec is reported.

350GB Exchange VM -- ~550 Users -- >6 million messages -- Heavily archived/stubbed -- performance is generally 25-50MB/sec

500GB Email Archive Server -- >6 million messages -- almost 3 million compressed/deduped attachments -- highly transactional SQL database for message and attachment indexing -- performance is generally 50-85MB/sec.

20GB Environmental Monitoring system -- System does practically nothing all day except write to a single file -- performance 40-65MB/sec.

100GB MS SQL Server -- multiple databases that are lightly used -- performance 300-500MB/sec.

275GB MS SQL Server -- Highly transactional, heavily used database with large objects -- performance 50-75MB/sec, except on weekend after a database reorg, then 10-15MB/sec.

As you can see, it depends heavily on the particular use pattern of the server. That being said, smaller VMs will never see the performance of larger VMs, even with CBT, because, as far as I can tell, the processing rate is reported over the entire time the backup ran, including all of the overhead of checking the VM, taking the snapshot, and removing the snapshot. For many smaller VMs the "overhead" is 2-3 minutes and the backup time is only 6 minutes, so the reported "performance" is cut in half. Also, for a larger VM it's almost guaranteed that a smaller percentage of blocks will need to be backed up, as a larger server will likely have more empty space and more "static" space (disk space containing files that do not change), so the relative changes are smaller. At least, that's what we generally see with our larger file and application servers. The only exception I've seen is highly transactional servers like Exchange and busy SQL servers.
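To put rough numbers on that overhead effect, here's a quick sketch in Python (the raw rate, data sizes, and overhead figures are just assumptions for illustration, not measurements from our jobs):

```python
# Minimal sketch: the reported rate covers the whole job window, including
# fixed per-VM overhead (VM checks, snapshot create/remove), not just the
# data transfer itself. All numbers here are assumed for illustration.

def reported_rate_mb_s(data_mb, raw_rate_mb_s, overhead_s):
    """Effective MB/s when fixed overhead is counted in the job time."""
    transfer_s = data_mb / raw_rate_mb_s
    return data_mb / (transfer_s + overhead_s)

# Small VM: ~3 minutes of actual transfer plus ~3 minutes of overhead,
# so the reported rate is roughly half the raw transfer rate.
print(reported_rate_mb_s(data_mb=20_000, raw_rate_mb_s=120, overhead_s=180))   # ~58 MB/s

# Large VM: the same overhead is amortized over a much longer transfer,
# so the reported rate stays close to the raw rate.
print(reported_rate_mb_s(data_mb=500_000, raw_rate_mb_s=120, overhead_s=180))  # ~115 MB/s
```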
jbsengineer
Enthusiast
Posts: 25
Liked: 3 times
Joined: Nov 10, 2009 2:45 pm
Contact:

Re: Processing times on large enterprise guests

Post by jbsengineer »

Good post.

Thanks for the examples. I guess my concern is mainly our 12TB of Exchange data (10,000 users - 10 VMs) and 25TB of file servers (roughly 25 file servers). Everything is being backed up via BE right now, but looking into the future we hope to leverage CBT. However, the processing times of your particular Exchange server have me a little concerned. What directly affects the processing time? Compression? De-dupe? For instance, on your Exchange server there might be 50-75GB of changed blocks; I would hope it could do better than 25-50MB/s directly off the SAN. I'm trying to find the bottleneck in that situation before we architect the backups, VEEAM server, etc. For instance, if the de-duplication is adding significant overhead we might build the environment around no de-dupe.

We are a university, and 5 of our file servers (5x 800GB) contain profiles and home directories for about 20,000 students, so you can imagine the amount of small changed blocks that can pile up in a single day. That's another concern...

We also plan on leveraging VEEAM replication in this environment. Hopefully we are not being too aggressive.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Processing times on large enterprise guests

Post by tsightler »

I'd be really concerned about the Exchange servers. How many total servers/datastores do you have for that many users and that much data? Here is my theory on why Exchange is so slow compared to other workloads; it's not 100% proven, but it's based almost exclusively on observation:

Exchange is highly transactional. The datastore format is prone to updating many blocks spread across the entire allocated EDB file. There are constant, very small block changes on a busy server as messages are marked read or deleted, maintenance cleans up space, pages are reorganized, etc. Because these changes are so small and so evenly spread across the disk, they work against Veeam's block processing method, which uses relatively large blocks (1MB). Even if only a simple flag is flipped in a block, on the next backup Veeam has to copy and process the entire 1MB block. So, in our environment, we typically see Veeam processing more than half of the blocks on our Exchange server after a busy day. But surely processing half should still be better than processing all the data, correct?
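To illustrate the effect of that 1MB granularity, here's a rough simulation in Python (my own illustration of the general idea, not Veeam internals; the block size, write count, and write size are all assumptions):

```python
# Rough sketch of why many tiny, scattered writes inflate the amount of data
# a backup working in ~1MB blocks must process. Not Veeam internals; all
# figures below are assumptions for illustration.
import random

BLOCK_MB = 1                      # assumed processing block size
edb_size_mb = 350 * 1024          # ~350GB datastore file
small_writes = 200_000            # tiny updates: flags flipped, pages touched

def blocks_touched(write_offsets_mb, block_mb=BLOCK_MB):
    """Count distinct backup blocks dirtied by a set of small writes."""
    return len({int(off // block_mb) for off in write_offsets_mb})

random.seed(0)
# Writes evenly scattered across the EDB: almost every write lands in a
# different 1MB block, so each tiny change drags a full block into the backup.
scattered = [random.uniform(0, edb_size_mb) for _ in range(small_writes)]

dirty_gb = blocks_touched(scattered) * BLOCK_MB / 1024
actual_gb = small_writes * 4 / 1024 / 1024      # if each write were ~4KB
print(f"~{dirty_gb:.0f} GB processed for roughly {actual_gb:.1f} GB of real changes")
```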

Well, not really. Take our example: we have a 350GB Exchange server which, after the initial Veeam backup with optimal compression, is 170GB. That means to make the initial full backup, the Veeam server had to read and write a total of 520GB (350GB + 170GB); however, the 350GB was read from one disk and written to the backup disk. Now what happens on the next night's backup pass? Veeam will have to read the changed blocks from the Exchange server, roughly 175GB, but then it will also have to read the old copies of those blocks from the backup storage (~87GB read), write those blocks to the rollback file (~87GB written), and finally write the new data to the VBK file (~87GB written). So basically, for the incremental, Veeam had to process slightly less data in total, but a much higher percentage of the data transfer is on the target storage, which in most cases is a slower, second tier of storage, and the pattern is read-write-write, which causes a lot of IOP overhead. The target storage becomes the bottleneck. I believe this is why the initial full backup of our Exchange server runs faster than our incrementals.
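To make that I/O pattern concrete, here's the same arithmetic as a small Python sketch (the figures are the ones from our server above; the read/write breakdown is my interpretation of what I observe, not a statement of Veeam's internals):

```python
# Back-of-the-envelope model of the I/O described above. Figures come from
# our ~350GB Exchange VM; the read/write breakdown is my interpretation.

full_source_gb = 350      # VM size read during the initial full
full_backup_gb = 170      # VBK size after compression (~2:1)
changed_src_gb = 175      # changed blocks read from the VM on a busy day
changed_bak_gb = 87       # the same change set after compression

# Initial full: read the whole VM from primary storage, write the VBK once.
full_target_io = full_backup_gb

# Incremental: read the old copies of the changed blocks out of the VBK,
# write them to the rollback file, then write the new blocks into the VBK.
incr_target_io = changed_bak_gb + changed_bak_gb + changed_bak_gb

print(f"full pass:        {full_target_io} GB of target I/O (one sequential write)")
print(f"incremental pass: {incr_target_io} GB of target I/O (read, write, write)")
```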

I'm really hoping that the new Veeam "tape-friendly" backups might help with this, because, based on how I expect it to be implemented (I have no idea how they'll actually do it), it should have less overhead on the target storage for the nightly incrementals, since it would simply write the new data to a new file rather than the current method of moving the old data in the existing VBK to a VRB file and then writing the new blocks to the VBK file.
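If the new mode really does just append the night's changes to a separate file (again, purely my assumption about the implementation), the target side of the sketch above would shrink to a single sequential write:

```python
# Continuing the sketch above, assuming the "tape-friendly" mode appends the
# night's compressed changes to a new file instead of rewriting the VBK.
# This is my assumption, not a description of how Veeam will implement it.
changed_bak_gb = 87

reversed_incr_target_io = 3 * changed_bak_gb   # read VBK + write rollback + write VBK
forward_incr_target_io = 1 * changed_bak_gb    # write the new increment file only

print(f"reversed incremental: ~{reversed_incr_target_io} GB hitting the target")
print(f"tape-friendly style:  ~{forward_incr_target_io} GB hitting the target")
```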

Still, 12TB of Exchange data is a lot of data and the solution will have to be carefully designed. Add to that the 25TB of file data, and I believe I'd be looking at a product that was optimized for this size of environment, especially for the replication piece. Veeam is great, but it simply has too much overhead for such a design; it would likely transfer 1-2TB every night based on its large block sizes, maybe more. Are you currently using Veeam at all now on any large servers, just to get an idea of how long it will take and how big the incremental passes are?