christopher-swe
Service Provider
Posts: 21
Liked: 1 time
Joined: Dec 14, 2016 6:54 am
Full Name: Christopher Svensson

Large incremental backups.

Post by christopher-swe »

We have just set up a new mail-cluster with 11 virtual servers running on VMware 6.5.
The majority of the servers are running Debian 9.

Our job is currently set to back up everything every 6th hour, with a minimum of 600 retention points (we need to keep the backups for at least 6 months), running incrementals plus one synthetic full every week.
Storage options are set to the defaults, with compression at Optimal and storage optimization at LAN target.
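
As a rough sanity check on that schedule, here is my own back-of-the-envelope Python, assuming 30-day months:

Code: Select all

# Sanity check of the schedule above (assumed 30-day months).
runs_per_day = 24 // 6                 # one incremental every 6 hours
print(runs_per_day * 30 * 6)           # 720 points to actually span 6 months
print(600 / runs_per_day)              # 600 points only cover 150 days (~5 months)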

The backup repository is a Windows Server 2016 (1607) machine with a 32TB ReFS volume formatted with a 64KB block size, running on RAID-10, also with a 64KB block size.
We have now reached about 200 retention points, and our total backup size on the ReFS volume is about 35TB.

The problem we are facing is the hefty incremental backups, ranging from 40-80GB every 6 hours. Is there a way to decrease this? I’ve read some posts regarding the CBT block size and the difference between the standard 1MB block size and the 256KB of WAN target. I tried those changes as well, but they didn’t make a notable difference.
Our old mail-cluster was running on dedicated hardware and was backed up with R1Soft CDP. For example, a backup chain for a storage node with 1.6TB of data and ~370 retention points only took about 3TB. Is it possible to match something similar with Veeam?
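
Rough math on the old chain, using just the figures above and assuming one full plus ~369 increments:

Code: Select all

# Per-increment estimate for the old R1Soft chain (assumption: one full
# backup plus increments make up the rest of the on-disk footprint).
protected_tb = 1.6
on_disk_tb = 3.0
points = 370

incremental_tb = on_disk_tb - protected_tb
per_increment_gb = incremental_tb * 1024 / (points - 1)
print(f"~{per_increment_gb:.1f} GB per increment")   # ~3.9 GB, vs 40-80 GB now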
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm

Re: Large incremental backups.

Post by PTide »

Hi,

Well, it will be tricky to tell for sure what the problem is, since we are comparing two different clusters, two different ways of backup operation (agent-based vs. agentless), and two different hardware sets (virtual vs. physical).
christopher-swe wrote:We have just set up a new mail-cluster with 11 virtual servers running on VMware 6.5. The majority of the servers are running Debian 9.
May I ask how big the total dataset on those servers is, so I can get the approximate deduplication ratio?
christopher-swe wrote:The problem we are facing is the hefty incremental backups, ranging from 40-80GB every 6 hours.
Would you elaborate on that, please? I don't quite understand where the "every 6th hour" figure comes from; as far as I understand, you run the backup once a day, don't you?

Thanks
christopher-swe
Service Provider
Posts: 21
Liked: 1 time
Joined: Dec 14, 2016 6:54 am
Full Name: Christopher Svensson

Re: Large incremental backups.

Post by christopher-swe »

10TB total, of which about 5TB is in use. Not sure about the dedupe; Veeam says 1.0x in the report, but around 3.0x for compression.

No, we run the backup every 6 hours.
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm

Re: Large incremental backups.

Post by PTide »

I guess this might be happening due to a lot of changes occurring inside the guests within the 6-hour interval. Do you have any estimate of how much data actually changes on those VMs between backup sessions? Also, do you have deduplication enabled on your ReFS volume?

Thanks
christopher-swe
Service Provider
Posts: 21
Liked: 1 time
Joined: Dec 14, 2016 6:54 am
Full Name: Christopher Svensson
Contact:

Re: Large incremental backups.

Post by christopher-swe »

PTide wrote:I guess this might be happening due to a lot of changes occurring inside the guests within the 6-hour interval. Do you have any estimate of how much data actually changes on those VMs between backup sessions? Also, do you have deduplication enabled on your ReFS volume?

Thanks
No, I have no data on that, sorry. But then again, I think there is high activity: each mail goes to an inbox folder when it's received and then gets moved to the current folder after it's been read. And count one more write if the customer deletes the mail.
So I'm guessing that in the worst case, with LAN target and its 1024KB block size, each mail could take up as much as 3MB?
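
To put numbers on that guess (a sketch with an assumed 50KB message; the three touches being receive, move, and delete):

Code: Select all

import math

# Worst-case write amplification per mail at LAN-target granularity.
# Assumed values: 1 MB tracking blocks, a hypothetical 50 KB message.
BLOCK = 1024 * 1024
mail_size = 50 * 1024

touches = 3                                    # deliver, move to cur, delete
blocks_per_touch = max(1, math.ceil(mail_size / BLOCK))
worst_case = touches * blocks_per_touch * BLOCK
print(worst_case // (1024 * 1024), "MB")       # 3 MB, even for a tiny mail
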
We don't have deduplication enabled. We have other backups on the same storage that really benefit from ReFS Fast Clone during synthetic fulls.

I guess there is no way to run both Fast Clone and Windows Deduplication?
PTide
Product Manager
Posts: 6551
Liked: 765 times
Joined: May 19, 2015 1:46 pm

Re: Large incremental backups.

Post by PTide »

christopher-swe wrote:Each mail goes to an inbox folder when it's received and then gets moved to the current folder after it's been read. And count one more write if the customer deletes the mail. So I'm guessing that in the worst case, with LAN target and its 1024KB block size, each mail could take up as much as 3MB?
Provided that the CBT block is 1MB, I'd say the total changes would hit 2MB in the worst case: the "delete" operation does not actually write zeroes to the guest filesystem, unless you've specifically configured it to do so.
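
For example (a sketch with assumed numbers, just to show the scale; the per-interval mail count below is hypothetical):

Code: Select all

# Corrected worst case: only operations that write data dirty CBT blocks,
# and an unlink is just a metadata update. Assumed values below.
BLOCK = 1024 * 1024                  # 1 MB CBT block
writing_ops_per_mail = 2             # deliver to inbox, then move after reading
mails_per_interval = 20_000          # hypothetical volume per 6-hour window

worst_case_gb = writing_ops_per_mail * mails_per_interval * BLOCK / 1024**3
print(f"~{worst_case_gb:.0f} GB per incremental")   # ~39 GB at these numbers
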
christopher-swe wrote:I guess there is no way to run both Fast Clone and Windows Deduplication?
That's correct.

Thanks
christopher-swe
Service Provider
Posts: 21
Liked: 1 time
Joined: Dec 14, 2016 6:54 am
Full Name: Christopher Svensson

Re: Large incremental backups.

Post by christopher-swe »

Thanks for the information and help.

I did some digging on our old backup solution, and apparently R1Soft CDP uses a 4KB block size. So I'm guessing that's why its incremental backups aren't as large.
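
Just to illustrate the difference that granularity makes (a sketch with an assumed 50KB mail and two writes per mail; not either product's actual internals):

Code: Select all

KB = 1024

def increment_bytes(mail_bytes, writes, block_bytes):
    """Bytes added to an increment for one mail, rounded up to whole blocks."""
    blocks = -(-mail_bytes // block_bytes)           # ceiling division
    return writes * blocks * block_bytes

mail = 50 * KB                                       # hypothetical 50 KB message
for block in (4 * KB, 1024 * KB):
    size = increment_bytes(mail, writes=2, block_bytes=block)
    print(f"{block // KB:>4} KB blocks -> {size // KB} KB per mail")

#    4 KB blocks -> 104 KB per mail
# 1024 KB blocks -> 2048 KB per mail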
