Hi Guys.
We've done some testing with Blob storage on Azure using tiering, and so far so good.
I'm just wondering about de-dupe and how it works at a container level with backups.
As an example, we have a customer with about 10 servers, 2 of which are file servers with DFS syncing a large 2TB file share. If we back both of these up, we essentially have a 4TB backup. If we push this to the *same* container on Azure, even though these are two different backup sets (separate sites), will de-duping happen in this scenario, and what can we expect?
Is there a best practice around this to get maximum de-dupe?
- Hayden Kirk (Service Provider)
Chief Product Officer
Re: blob storage questions
No, deduping will not happen in this case. Deduping between VMs only works within the same job, and only when per-VM backup files are not enabled. Thanks!
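To illustrate the scoping (a purely conceptual sketch, not how backup files are actually laid out on Blob storage): when block hashes are tracked per backup file, two separate backup sets that contain the same data still each store their own copy of every block, even if they land in the same container.

```python
# Conceptual sketch only -- NOT how the product or Azure Blob actually store
# data; it just shows why dedup scoped to a single backup file cannot reclaim
# space when two separate jobs write the same data to the same container.
import hashlib
import os

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks, purely illustrative

def dedup_store(streams):
    """Store one or more data streams into a single backup file with
    block-level dedup scoped to that file. Returns unique bytes stored."""
    seen = set()     # block hashes already present in THIS backup file
    stored = 0
    for stream in streams:
        for i in range(0, len(stream), BLOCK_SIZE):
            block = stream[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).digest()
            if digest not in seen:   # only previously unseen blocks consume space
                seen.add(digest)
                stored += len(block)
    return stored

# Pretend both DFS members hold the same 8 MiB of data (stand-in for 2 TB).
share_data = os.urandom(8 * BLOCK_SIZE)

# Case 1: both servers in the SAME job / backup file -> identical blocks dedupe.
same_job = dedup_store([share_data, share_data])

# Case 2: two SEPARATE backup sets pushed to the same container -> each backup
# file keeps its own block table, so the data is effectively stored twice.
separate_jobs = dedup_store([share_data]) + dedup_store([share_data])

print(f"same job:      {same_job // BLOCK_SIZE} blocks stored")       # 8
print(f"separate jobs: {separate_jobs // BLOCK_SIZE} blocks stored")  # 16
```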
Hayden Kirk (Service Provider)
Re: blob storage questions
Thanks for this. Is there a better way of handling DFS servers? The data is essentially being duplicated.
Chief Product Officer
Re: blob storage questions
What about excluding the duplicated folders from the backup of one of the VMs? That sounds like a much cleaner approach than creating the duplication first and then dealing with it later.
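If you want to sanity-check that the two replicas really are identical before excluding the share from one server's job, a rough sketch along these lines would do. The UNC paths are just placeholders for your environment, and on a 2TB share you would probably sample a subset of files rather than hash everything.

```python
# Rough sketch, not a supported tool: spot-check that two DFS replicas hold
# identical content before excluding the share from one server's backup job.
import hashlib
from pathlib import Path

PRIMARY = Path(r"\\fileserver1\share")    # replica you will keep backing up (placeholder)
SECONDARY = Path(r"\\fileserver2\share")  # replica you plan to exclude (placeholder)

def file_digest(path, chunk=1024 * 1024):
    """Hash a file in 1 MiB chunks so large files don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

mismatches = []
for primary_file in PRIMARY.rglob("*"):
    if not primary_file.is_file():
        continue
    secondary_file = SECONDARY / primary_file.relative_to(PRIMARY)
    if not secondary_file.is_file():
        mismatches.append(f"missing on secondary: {primary_file}")
    elif file_digest(primary_file) != file_digest(secondary_file):
        mismatches.append(f"content differs: {primary_file}")

print(f"{len(mismatches)} differences found")
for m in mismatches[:20]:
    print(m)
```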
Hayden Kirk (Service Provider)
Re: blob storage questions
Because that's too easy. Thanks!