Jon Rhoades (Enthusiast)
Compression - is it working?
Hey,
We recently moved our main file server from a Linux VM with ZFS LZ4 compression and a VM-snapshot Veeam backup to a Windows failover cluster with an uncompressed NTFS data volume, protected by a Veeam Windows agent backup. We're using a "Dell" ML3 LTO tape library, SAS-connected to the backup server, and a Linux XFS repo for the disk backup storage.
The old server had 29TB of data, the volume was 33TB, and Veeam said it processed 33TB for the disk backup. When it went to tape, Veeam transferred 29TB and used 4 x LTO8 tapes - so it all kinda made sense.
The new server has 34TB of data, the volume is 60TB, and Veeam says it processes 34TB for the agent (volume-level) backup. When it goes to tape, Veeam transfers 57TB and uses 5 x LTO8 tapes - transferring ~20TB more than expected. Annoying for the extra tape and the extra 26 hours of backup time!
I have done one run with compression "off" for the job and one with it "on", and the same amount is transferred regardless. We do have inline data reduction and "optimal compression" on. The logs for the backup job show "Job Options: Hardware Compression: True" and I can't see any other errors for the job.
I don't expect to get 2:1 compression as promised by LTO, but I also don't expect the tape backup to be bigger than the source data. Surely it's not copying the empty space to disk? Am I doing something stupidly wrong here? Is there a way of seeing what's happening?
Thanks Jon
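For reference, the gap described above in plain numbers - a quick back-of-the-envelope sketch (Python), using only the TB figures quoted in the post; nothing here comes from the job logs themselves:

```python
# Back-of-the-envelope check of the figures quoted above (all sizes in TB).
old = {"data": 29, "volume": 33, "processed": 33, "to_tape": 29}
new = {"data": 34, "volume": 60, "processed": 34, "to_tape": 57}

for name, s in (("old server", old), ("new server", new)):
    excess = s["to_tape"] - s["data"]
    print(f"{name}: tape transfer exceeds source data by ~{excess} TB")

# old server: tape transfer exceeds source data by ~0 TB
# new server: tape transfer exceeds source data by ~23 TB
```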
Jon Rhoades (Enthusiast)
Re: Compression - is it working?
Also, there is no encryption anywhere in the process, and TapeHost.log says: "----------Drive params---------- Compression: true".
Hannes Kasparick (Product Manager)
Re: Compression - is it working?
Hello,
If the backups are compressed with Veeam, then enabling tape compression has little to no impact.
How much data is stored on disk? Is it just one full backup, or also incremental backups? The "transferred" value - is that for the backup job, or for the tape job? The value should be the same for the backup job and the tape job. If not, what did support say about that (case number)?
Best regards,
Hannes
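To illustrate the point about tape hardware compression on top of Veeam compression, here is a generic sketch (not Veeam code; zlib stands in for both the job-level and the drive-level compressor): data that has already been compressed once gains almost nothing from a second pass.

```python
import os
import zlib

# Generic illustration (not Veeam code): a payload that compresses well on the
# first pass is nearly incompressible on the second pass, which is why drive
# hardware compression barely changes the amount written to tape when the
# backup files are already Veeam-compressed.
payload = os.urandom(1024) * 4096            # repetitive payload, ~4 MiB

first_pass = zlib.compress(payload, 6)       # stands in for job-level compression
second_pass = zlib.compress(first_pass, 6)   # stands in for tape hardware compression

print(f"raw payload:  {len(payload):>9} bytes")
print(f"first pass:   {len(first_pass):>9} bytes")
print(f"second pass:  {len(second_pass):>9} bytes (barely smaller, can even grow)")
```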
Jon Rhoades (Enthusiast)
Re: Compression - is it working?
It's doing the tape backup from a preceding synthetic full agent backup. The agent backup processes 34TB and transfers varying amounts, whereas the tape job processes and transfers 58TB.
Just opened support case #05473850
Hannes Kasparick (Product Manager)
Re: Compression - is it working?
Hello,
After checking the screenshots in the case, I just realized that you are backing up a failover cluster (as you mentioned, but it looks like I did not notice it).
The backup-to-tape job duplicates the data because the cluster has two nodes. While not nice, that's how it works today. A workaround could be to use file-to-tape.
Best regards,
Hannes
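A rough sanity check of how the two-node duplication lines up with the numbers in this thread (a sketch with an assumed on-disk full size; the 28.5TB figure below is an estimate, not a value from the job logs):

```python
# Hypothetical sanity check: if the backup-to-tape job writes the cluster data
# once per node, the tape transfer is roughly node_count times the size of a
# single full backup on disk.
node_count = 2               # active + standby node of the failover cluster
per_node_full_tb = 28.5      # assumed size of one synthetic full on disk (TB), not from the logs

expected_tape_tb = node_count * per_node_full_tb
print(f"expected tape transfer: ~{expected_tape_tb} TB")   # ~57 TB, close to what the tape job reports
```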
Jon Rhoades (Enthusiast)
Re: Compression - is it working?
I just got that response from support, and I just don't get it. You have a feature for backing up Windows failover clusters which works fine: it doesn't back up the standby node and it produces a sensible backup size. Why on earth would a secondary backup job decide to duplicate the data?
File-to-tape isn't really an option, as we want to do all tape backups from the existing disk backups; otherwise:
a) we are duplicating the backups to disk and will have to plan so they don't clash, which is hard with a tape backup because of the tapes
b) a full backup session lasts 30-40 hours, which really sucks.
My only real option is to do a non-cluster-aware backup of the active node and hope it doesn't fail over; otherwise I'm not going to be backed up.
Is this not an issue for other customers? Are we the only people who do agent backups of failover clusters with a GFS backup to tape?
Hannes Kasparick (Product Manager)
Re: Compression - is it working?
Hello,
Actually, there is a second way: backup-to-tape also works, as long as a normal media pool is used and synthetic fulls are enabled on the backup job (which I read you have).
The GFS restore point for tape is generated on demand as a "virtual synthetic full", not a simple copy of the last synthetic full. I will check whether V12 brings improvements here (because we have improvements for similar situations with the backup copy job in V12).
Best regards,
Hannes
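Conceptually (a toy sketch only, not Veeam internals), "generated on demand" means the full point written to tape is assembled from the latest full plus the increments that follow it, rather than copied byte-for-byte from an existing full backup file:

```python
# Conceptual sketch (not Veeam internals): a "virtual synthetic full" is
# synthesized from the latest full plus later increments, where the newest
# version of each block wins.
full = {"blk1": "A0", "blk2": "B0", "blk3": "C0"}            # blocks in the last full
increments = [{"blk2": "B1"}, {"blk3": "C2", "blk4": "D0"}]  # changed blocks per increment

virtual_full = dict(full)
for inc in increments:
    virtual_full.update(inc)      # later increments overwrite older block versions

print(virtual_full)   # {'blk1': 'A0', 'blk2': 'B1', 'blk3': 'C2', 'blk4': 'D0'}
```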
Hannes Kasparick (Product Manager)
Re: Compression - is it working?
Update: I just got the information that we plan to fix it in V12.