R.Boening
Novice
Posts: 5
Liked: 3 times
Joined: May 13, 2020 6:12 am
Full Name: Ralf Boening
Contact:

Optimizing performance of rotating disks

Post by R.Boening »

Hello everybody,
I'm in need of best-practice advice for using a rotating-disk repository.
Here's the current setup:
B&R 10 with an Enterprise license on a physical server, 2 ESX hosts, a SAN, everything connected via Fibre Channel.
The "usual" backup works fine, we are using incremental backups and everything works as expected.
The server is quite beefy with 16 cores (32 threads), 32 GB of RAM and a RAID array (mechanical disks, no flash), so we usually use extreme compression and WAN optimization. With these settings the CPU utilization is about 55% and the backups are nice and small. The bottleneck is the Fibre Channel or the SAN storage, depending on whether it's a full backup or an incremental.

However, for legal and archival reasons we need a full backup every day for offsite storage. This storage is not just offsite but "offline": we need to put some kind of storage medium in a safe.
So right now we are using an external USB 3.0 HDD to back up ~6 TB. We created a new repository, configured it for "rotating disks", wrote a pre-run script that wipes the external disk to force an active full backup, and called it a day. Everything works fine, except the transfer rates...
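For reference, the pre-run script itself is nothing fancy; a simplified sketch of what it does is below (the drive letter and folder are placeholders, not our real paths):

Code:
# Simplified sketch of the pre-run wipe script (paths are placeholders).
# Emptying the rotating-disk repository forces the job to run an active full.
import shutil
from pathlib import Path

REPO = Path(r"E:\VeeamRotating")  # assumed mount point of the external USB disk

def wipe_repository(repo: Path) -> None:
    if not repo.exists():
        raise SystemExit(f"Rotating disk not found at {repo} - is the drive attached?")
    for entry in repo.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()

if __name__ == "__main__":
    wipe_repository(REPO)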

If I back up to the internal RAID first and then use robocopy to copy the files to the USB disk, I get around 200 MB/s write speed; Resource Monitor reports 200 MB/s writes with 0 MB/s reads while copying. This method is not preferable because it eats up storage space on our backup server, and the two-step procedure (create the full, then push it to USB) sometimes takes too long; it has to be done within office hours.
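The staging copy itself is just a plain file copy, roughly like this (paths are placeholders; /J is robocopy's unbuffered-I/O switch, which suits multi-GB backup files):

Code:
# Rough sketch of the staging copy step (paths are placeholders).
import subprocess

SOURCE = r"D:\Backups\OffsiteFull"   # staging folder on the internal RAID
TARGET = r"E:\VeeamRotating"         # external USB disk

# /J = unbuffered I/O, /R:1 /W:1 = one quick retry instead of the long defaults.
subprocess.run(["robocopy", SOURCE, TARGET, "*.vbk", "/J", "/R:1", "/W:1"], check=False)
# check=False because robocopy uses exit codes 0-7 for successful runs.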

If I back up directly to the external USB HDD, everything should fit within office hours, but I'm only getting around 120-130 MB/s of writes AND 1-2 MB/s of reads. Those additional reads cause a massive write-performance drop on our external disk, and I'm trying to figure out where they come from.
Can I prevent or minimize those reads in any way? I'm using extreme compression, inline deduplication and WAN storage optimization for the smallest possible backups, but I suspect some of that is causing these disk reads. Would it be a good idea to change a setting related to inline deduplication or storage optimization?

The difference between 120 MB/s and 200 MB/s is kind of a big deal when transferring 6 TB...

Thanks in advance for any suggestions.
PetrM
Veeam Software
Posts: 3626
Liked: 608 times
Joined: Aug 28, 2013 8:23 am
Full Name: Petr Makarov
Location: Prague, Czech Republic
Contact:

Re: Optimizing performance of rotating disks

Post by PetrM »

Hello Ralf,

I don't think it is possible to get rid of these extra reads by changing job settings such as storage optimization or inline deduplication.
Basically, disabling inline deduplication may increase the performance of incremental runs, but keep in mind that larger backups will be produced as a result.
On the other hand, it's quite complicated to estimate the potential performance gain in every specific case; nevertheless, it's worth testing if you're sure you have enough disk space.

The only idea that comes to my mind for this constant reading during job execution is a reversed incremental backup chain, where 1 read and 2 write operations are required for each data block.
If we're talking about a simple forward incremental chain, I don't see any reason for this constant reading; you should contact our support team to find out what exactly triggers these read operations.
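Just to illustrate the difference in I/O per changed block, a simplified sketch (not the actual engine code):

Code:
# Simplified model of the per-block I/O in each backup mode (illustrative only).

def forward_incremental(changed_block, increment_file):
    # Forward incremental: the changed block is appended to the new increment
    # file -> 1 write per changed block, no reads against the repository.
    increment_file.write(changed_block)

def reversed_incremental(changed_block, offset, full_file, rollback_file):
    # Reversed incremental: the old block is read out of the full backup,
    # preserved in a rollback file, and then overwritten with the new block
    # -> 1 read + 2 writes per changed block against the repository.
    full_file.seek(offset)
    rollback_file.write(full_file.read(len(changed_block)))
    full_file.seek(offset)
    full_file.write(changed_block)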

Thanks!
R.Boening
Novice
Posts: 5
Liked: 3 times
Joined: May 13, 2020 6:12 am
Full Name: Ralf Boening
Contact:

Re: Optimizing performance of rotating disks

Post by R.Boening »

Thanks for the reply.
This job is creating an active full backup every time, so I don't know where these reads come from either. Right now we are running the job overnight, so we don't really care whether it finishes at 3 AM or 9 AM. Having a lot of I/O while using any kind of USB drive is a pain in the butt. But I can't think of any other way; it has to be dead simple so that even non-IT workers can change the drive. Maybe I should try eSATA if the server supports it. eSATA is kind of dead, but maybe it's better at handling the I/O.
At least Veeam provides a very robust way to create a daily full backup. I still remember the days of multiple tapes per day and changing tapes mid-day... forgot to change one or had to attend a long meeting? No backup for you.
I think the only real solution is to convince management that daily fulls are not necessary. Maybe they'll settle for a weekly full backup and server-based incremental jobs for everything else.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Optimizing performance of rotating disks

Post by tsightler » 1 person likes this post

The reads are most certainly there because the Veeam backup format is not just a simple tar- or zip-style archive. When I'm describing the Veeam backup format I sometimes refer to it as a "database of blocks", with metadata referencing the indexed blocks, and this metadata must be updated as new blocks are added. Also, to guarantee consistency there are journal functions, etc. It's a lot more than just copying a file, and this design is optimal for the normal use case, where you take one full and then use some forever-incremental strategy.
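A toy model of what I mean by "database of blocks" (purely illustrative, nothing to do with the real VBK layout): every block that is appended also forces a lookup and an update of the metadata index, and that bookkeeping is what turns a "simple copy" into mixed read/write I/O on the target.

Code:
# Toy model of a "database of blocks" backup file (illustrative only).
import hashlib

class BlockStoreBackup:
    def __init__(self):
        self.payload = bytearray()  # compressed block data
        self.index = {}             # block fingerprint -> offset (metadata)

    def append_block(self, data: bytes) -> None:
        fingerprint = hashlib.sha1(data).hexdigest()
        if fingerprint in self.index:
            return                  # deduplicated: only a metadata reference is needed
        # The payload write is sequential, but the index lookup/update is the
        # extra bookkeeping that a plain tar/zip-style copy does not have.
        self.index[fingerprint] = len(self.payload)
        self.payload.extend(data)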

I notice you mentioned that you are using WAN storage optimization to "produce the smallest backup", but this will likely have almost no impact on full backup size while creating 4x the metadata. Usually the difference in full backup size between the largest block size and the smallest is a couple of percent at most, and is often <1%. The smaller block sizes are mostly useful for reducing the size of incremental backups, because incremental backups can be much more granular about the changed blocks. The cost of smaller block sizes is more metadata and thus slower overall performance, even for the full.

I'd suggest trying the exact opposite: use the default or perhaps even the largest block size, as doing so will reduce the amount of metadata that has to be tracked in the backup file by 4x and 8x respectively. Hopefully this would reduce the additional reads as well, and maybe you'd get closer to that theoretical 200 MB/s.
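To put rough numbers on it, here's a back-of-the-envelope count of how many blocks have to be tracked for a ~6 TB full at different block sizes (the block sizes are the commonly cited values for each storage-optimization setting, so treat them as an assumption):

Code:
# Back-of-the-envelope: blocks to track for a ~6 TB full backup at different
# block sizes (assumed typical values; compression and dedupe ignored).
BACKUP_SIZE_KB = 6 * 1024 ** 3  # 6 TB expressed in KB

block_sizes_kb = {
    "WAN target": 256,
    "LAN target": 512,
    "Local target (default)": 1024,
    "Local target (large blocks)": 4096,
}

for name, block_kb in block_sizes_kb.items():
    print(f"{name:30s} ~{BACKUP_SIZE_KB // block_kb:,} blocks of metadata")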

That being said, I generally agree with you: having to get a full backup offsite every day is not really a great solution, and eventually you will hit the limits of this approach no matter what, so moving to some type of incremental approach is best.
R.Boening
Novice
Posts: 5
Liked: 3 times
Joined: May 13, 2020 6:12 am
Full Name: Ralf Boening
Contact:

Re: Optimizing performance of rotating disks

Post by R.Boening » 1 person likes this post

Thank you for the info about metadata and storage optimization, that's very useful indeed. Of course there has to be metadata somewhere; more blocks, more metadata... please don't tell anyone I didn't realize that myself ;)
This might even improve my backup at home (using the free agent and B&R Community Edition) because, as you know, home users usually back up to USB disks.
Anyway, first tests show transfer rates of 155 MB/s for "Local target" and over 175 MB/s for "Local target (large blocks)"! Perfect, exactly what I was trying to achieve, thank you.
The job is still running, so I don't know the impact on the backup size yet, but a few MB more won't hurt much considering the new transfer rates.

Now that I know how storage optimization influences I/O and the deduplication ratio, I might have to re-tweak some backup jobs.
Thanks for solving my problem and teaching me something new about Veeam.
R.Boening
Novice
Posts: 5
Liked: 3 times
Joined: May 13, 2020 6:12 am
Full Name: Ralf Boening
Contact:

Re: Optimizing performance of rotating disks

Post by R.Boening » 1 person likes this post

Update:
The backup size increased by 1% for a full backup, but transfer rates increased by about 33% (Local target, large blocks). This is a test VM, but the production servers should behave roughly the same. Thanks again for pointing me in the right direction.
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: Optimizing performance of rotating disks

Post by tsightler »

Thanks for sharing the results and glad the suggestion seems to be helping with your use case. Good luck!