-
- Enthusiast
- Posts: 75
- Liked: 3 times
- Joined: Jun 16, 2010 8:16 pm
- Full Name: Monroe
- Contact:
Local target (16 TB + backup files) / Questions
With regards to this setting:
Local target (16 TB + backup files) under Storage Optimization.
I was hoping to get some real-world input on what others are seeing when using this setting versus "Local Target". I am trying to determine if the speed increase is worth the size increase of the full backups and incrementals afterwards.
1) It should be faster, but has anyone done comparisons to know what kind of difference it makes in actual numbers? Does a 3-hour job drop to 2 hours? Is it 15% faster? 10%? Etc.
2) How much bigger are the full VBK and the subsequent VRBs? For example, did a 2000 GB VBK grow to 2300 GB? Did incrementals grow from 100 GB to 150 GB?
I am doing some tuning on our storage repositories and I am trying to determine whether the speed increase is enough to offset the larger disk footprint. The backup copy jobs eventually get sent out to the cloud, so keeping the files smaller helps with the time that takes. There is a balance, and I don't want to create any major increases in the cloud copies.
I am doing some test jobs to see what I can learn but I wanted to see what kind of results others are seeing.
Thanks in advance..
MarkM
-
- Influencer
- Posts: 11
- Liked: 1 time
- Joined: Dec 14, 2012 6:42 pm
- Contact:
Re: Local target (16 TB + backup files) / Questions
What difference it will make in terms of file size will be highly dependent on your data (and on how the virtual drives you use are storing it), as with anything dedup-related. Identical large files will deduplicate very well, but the dedup ratio for many small files under a certain size threshold will suffer, since the deduplication block size will be too large to catch the similarities.
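As a crude sketch of that effect (this is not Veeam's actual dedup logic, and the data and block sizes are made up for illustration), you can see how a small edit to an otherwise identical file defeats dedup once blocks get large:

```python
import hashlib
import os

def dedup_ratio(data: bytes, block_size: int) -> float:
    """Split data into fixed-size blocks and count unique block hashes."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

# Two 1 MiB "files": identical except for a 16-byte change at the start
# of the second one -- crudely mimicking two copies of a file, one edited.
base = os.urandom(1024 * 1024)
edited = b"\x00" * 16 + base[16:]
data = base + edited

print(dedup_ratio(data, 4 * 1024))     # small blocks: nearly 2x dedup
print(dedup_ratio(data, 1024 * 1024))  # large blocks: the one edit defeats dedup entirely
```

With 4 KiB blocks only the one changed block is unique, so almost everything dedupes; with 1 MiB blocks the two files no longer share a single block.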
I don't think, however, that the main purpose of this setting is to offer a speed vs. backup size trade-off. It is more that if you use the normal setting with backups that grow to 16 TB+, you may well run into actual CPU/memory limits because the metadata table becomes too large.
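To put a rough number on the metadata point: the table scales with the number of blocks tracked. Assuming the commonly cited block sizes for these two settings (1 MB for "Local target", 4 MB for "Local target (16 TB + backup files)" in recent versions), a back-of-the-envelope count looks like this:

```python
def metadata_blocks(backup_tb: int, block_kb: int) -> int:
    """Blocks the dedup/metadata table has to track for one backup file."""
    return backup_tb * 1024 ** 3 // block_kb

# A 16 TB backup file: 1 MB blocks vs. 4 MB blocks.
print(metadata_blocks(16, 1024))  # 16,777,216 blocks to track
print(metadata_blocks(16, 4096))  # 4,194,304 blocks -- a quarter of the entries
```

Whatever the per-entry overhead actually is, quadrupling the block size cuts the table to a quarter of the entries for the same backup size.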
-
- Veteran
- Posts: 354
- Liked: 73 times
- Joined: Jun 30, 2015 6:06 pm
- Contact:
Re: Local target (16 TB + backup files) / Questions
If you're using a dedupe device as your repository: we did see an increase in recovery speed by switching to large blocks (16 TB+) with ours, and recovery speed can become very important when the time comes. I don't have numbers since it's been a while, but the seat-of-the-pants feel when we tested was that it made a real improvement in restores. For us, improved restore times from the dedupe device were the goal, so we accepted whatever impact there might be on backup size or backup time.
VMware 6
Veeam B&R v9
Dell DR4100's
EMC DD2200's
EMC DD620's
Dell TL2000 via PE430 (SAS)
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Local target (16 TB + backup files) / Questions
Improvement in restore times from dedupe devices is indeed expected with the larger block. Basically, Veeam B&R makes far fewer requests to get the data (even though it reads slightly more data in total than with the smaller block), and dedupe devices can typically retrieve a larger block in about the same time they need to retrieve a smaller one.
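A rough illustration of the request-count difference (illustrative arithmetic only, not measured Veeam behavior), again assuming 1 MB vs. 4 MB blocks:

```python
def restore_reads(restore_gb: int, block_kb: int) -> int:
    """Minimum number of block read requests a full restore issues."""
    return restore_gb * 1024 ** 2 // block_kb

# Full restore of a 2 TB VM at the two block sizes.
print(restore_reads(2048, 1024))  # 2,097,152 requests with 1 MB blocks
print(restore_reads(2048, 4096))  # 524,288 requests with 4 MB blocks
```

If the appliance's per-request latency dominates over raw throughput, four times fewer requests translates almost directly into faster restores.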
-
- Influencer
- Posts: 18
- Liked: 1 time
- Joined: Nov 30, 2017 5:46 pm
- Full Name: Andy Perkins
- Contact:
Re: Local target (16 TB + backup files) / Questions
Is Local target (16 TB + backup files) needed if you are doing per-VM .vbk's? The biggest .vbk's we get are maybe 2 TB max. Catalyst + StoreOnce recommends Local target (16 TB + backup files) when setting up the job. What about setting up a CIFS share on a dedup appliance like a StoreOnce? Would the optimization and compression levels still apply the same as they do with Catalyst or some other integration?
-
- Chief Product Officer
- Posts: 31806
- Liked: 7299 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Local target (16 TB + backup files) / Questions
No, you don't need to use this setting, just stick with the default.