We are being presented with buying a Dell Data Domain and running EMC Avamar on it. They sized our environment at 22 TB for a full backup of all VMs. They showed us a graph projecting five years out where the data requirements are gigantic, and their answer is that we will get at least 35:1 dedupe with this system. They do weekly fulls etc. We would be retaining 2 weeks on premise but also GFS on the box.
They told me that with Veeam we would have to buy a huge repository because its dedupe is poor in comparison.
I explained to them my plan (and this is where I need you to chime in): we will use Windows Server 2016 with a 64 KB block ReFS volume as the Veeam repository. So the 22 TB full will only generate, say, 2 TB change-rate incrementals each day, and on the weekend we do a block-cloned synthetic full that consumes about the same space as a daily incremental. With only 2 weeks of retention, we basically never need growth in space beyond whatever the change rate climbs to over time, plus a bit more if we add any new VM to the full backup, etc. So in my mind, with Veeam we only need say 30-40 TB and slow growth from there. Is that right?
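To sanity-check my own math, here's a rough sizing sketch in Python. The numbers are the assumed ones from above (22 TB full, ~2 TB daily change, 14 days retention), not measurements, and it ignores Veeam compression, so treat both figures as upper bounds:

```python
# Rough repository sizing for a ReFS repo with block-cloned synthetic fulls.
# Assumptions (illustrative, not measured): one 22 TB active full, ~2 TB of
# new/changed blocks per daily incremental, weekly synthetic fulls sharing
# blocks with earlier points via ReFS fast clone, 14 days retention.

FULL_TB = 22.0          # initial full backup
DAILY_CHANGE_TB = 2.0   # unique new blocks written per day
RETENTION_DAYS = 14

# With block clone, a synthetic full only costs roughly one day's worth of
# new blocks on disk; only the initial full is stored at its logical size.
physical_tb = FULL_TB + DAILY_CHANGE_TB * RETENTION_DAYS
print(f"ReFS + block clone: ~{physical_tb:.0f} TB physical")   # ~50 TB

# Without block clone, every weekly full materializes at full logical size:
weekly_fulls = RETENTION_DAYS // 7 + 1
no_clone_tb = FULL_TB * weekly_fulls + DAILY_CHANGE_TB * RETENTION_DAYS
print(f"No block clone:     ~{no_clone_tb:.0f} TB physical")   # ~94 TB

# Veeam's built-in compression would shrink both figures further.
```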
Their minds were blown and they all acted like they had never heard of ReFS, claiming no large company uses it or Veeam, and that Veeam is only good for smaller companies. They kept saying we need all this dedupe. They said they have lots of customers using Veeam sending the data to a Data Domain unit because they needed the dedupe (but my understanding is that setup still requires dedupe precisely because a Data Domain can't do ReFS block cloning, which I told them).
So to me, if I make a SOBR and use the capacity tier to send all my GFS points to Wasabi, and only keep 2 weeks for all systems locally, am I correct in my sizing concerns? I know we don't have exact numbers, but the bottom line is we won't be creating 22 TB files each week because we will be using spaceless synthetic fulls.
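Back-of-the-envelope for the split I have in mind (Python again, with an assumed GFS scheme of 12 monthlies + 5 yearlies that I invented for illustration; the Wasabi figure especially is a guess since how compactly offloaded GFS points land in object storage depends on block reuse between them):

```python
# Illustrative split between the local SOBR performance tier and the Wasabi
# capacity tier. Assumptions: 22 TB full, 2 TB/day change, 14 days local.

FULL_TB, DAILY_TB, LOCAL_DAYS = 22.0, 2.0, 14

local_tb = FULL_TB + DAILY_TB * LOCAL_DAYS
print(f"Local tier (steady state): ~{local_tb:.0f} TB")        # ~50 TB

# Worst case: every offloaded GFS point stored at its full logical size.
gfs_points = 12 + 5   # hypothetical 12 monthlies + 5 yearlies
cloud_worst_tb = FULL_TB * gfs_points
# Offloaded points can reuse unchanged blocks already in the bucket, so the
# real footprint should sit well below this; the actual ratio depends on
# the change rate between GFS points.
print(f"Wasabi worst case: ~{cloud_worst_tb:.0f} TB logical")  # ~374 TB
```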
Oh, and it seems like for such a gigantic company and product, Avamar requires plug-ins or agents of some type for granular restores of SQL/Exchange, and it can't even do AD. So to me that seems poor.
- Expert
- Posts: 186
- Liked: 22 times
- Joined: Mar 13, 2019 2:30 pm
- Full Name: Alabaster McJenkins
- Expert
- Posts: 193
- Liked: 47 times
- Joined: Jan 16, 2018 5:14 pm
- Full Name: Harvey Carel
Re: Avamar vs Veeam quick questions
Hiya Derek,
We're not huge, but we have around 200-some TB moving through Veeam just fine. I consider my shop pretty small compared to my contemporaries, so right off the bat, I'd ignore the "small businesses only" comment.
Second, tons of big companies don't just use ReFS, they rely on it. While it was a bumpy road, it's fine now aside from the RAM requirements. The reference architecture from both Veeam and EMC basically positions the EMC devices as secondary targets, despite what the marketing material says. I personally had a Data Domain on site for about 5 years, and restores from it were a nightmare; we just offloaded to regular storage using the Files tab, simply because the offload + restore was faster than restoring from the Data Domain directly. I've been told the higher-end Data Domains are way better, but that means putting out the money for a higher-end unit, so...
As for sizing ReFS, just be careful -- you *should* have a smaller physical footprint, but SOBR can be risky: if you ever have to violate the placement policy for any reason, you lose block cloning, AFAIK.
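To picture why that hurts, think of each restore point as a set of block IDs, where physical usage on one volume is the union of the sets. A toy model in Python (all numbers invented for illustration):

```python
# Toy model of ReFS block cloning: each backup file is a set of block IDs;
# on a single volume, shared blocks are only stored once.

BLOCK_TB = 0.001  # pretend each block is 1 GB

full = set(range(22_000))                  # 22 TB initial full
incr = set(range(22_000, 24_000))          # 2 TB of changed blocks
synthetic_full = (full - set(range(2_000))) | incr  # clones most of 'full'

# Same ReFS volume: physical usage is the union of all blocks.
shared_tb = len(full | incr | synthetic_full) * BLOCK_TB
print(f"Same volume:      ~{shared_tb:.0f} TB")  # ~24 TB

# Copy the files to another extent/volume and the sharing is gone; each
# file materializes at its full logical size.
flat_tb = (len(full) + len(incr) + len(synthetic_full)) * BLOCK_TB
print(f"After evacuation: ~{flat_tb:.0f} TB")    # ~46 TB
```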
We've had a fine time following the reference architecture: fast primary for short-term storage, archival for long term (read: dedupe), and the wonderful world of tape.
- Expert
- Posts: 186
- Liked: 22 times
- Joined: Mar 13, 2019 2:30 pm
- Full Name: Alabaster McJenkins
Re: Avamar vs Veeam quick questions
Thank you so much for that response; it really helps and puts me more at ease.
Regarding your comment on the SOBR danger of losing the space savings: let me run this by you and see if you agree that we would still be fine. First off, I would only be using a SOBR because the capacity tier requires one; it would still be just one actual server/storage extent.
With the capacity tier (Wasabi cloud), the local files on your repo for all GFS points are tiny. So let's say we need to move to a new server or storage system underneath Veeam. My plan would be to just switch out the hardware, import the configuration, then delete or disregard the old short-retention backup chain (since losing block clone would make its storage requirement impossible) and start a new one immediately.
So overall we lose, say, 2 weeks of retention for all systems while the new chain builds back up, but all the long-term GFS stuff is still there in the capacity tier hosted in the cloud, and if we need to restore an old file or files we can do that right away.
I have read about how you can't keep the space savings across storage, and that is a huge issue to me, but as I understand it, it's a Windows/Microsoft limitation in ReFS and not a Veeam limitation.
- Veeam Software
- Posts: 2097
- Liked: 310 times
- Joined: Nov 17, 2015 2:38 am
- Full Name: Joe Marton
- Location: Chicago, IL
Re: Avamar vs Veeam quick questions
Ultimately, if you need dedupe, you can still purchase a Data Domain and use it with Veeam. Both Avamar and Veeam use DD Boost for source-side deduplication with Data Domain, and the dedupe rates are the same either way since they're controlled by the appliance.
For a Windows repo, not only is there ReFS, but with Windows Server 2019 you can enable Windows deduplication on ReFS volumes for even more data reduction. And if you only need two weeks of retention, you'll be hard-pressed to get much benefit from any hardware deduplication appliance over what you get with Windows natively.
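Worth noting that quoted dedupe ratios mostly measure retention depth anyway. Rough arithmetic (Python, borrowing the assumed figures from earlier in the thread, purely illustrative):

```python
# Effective reduction ratio = logical data protected / physical data stored.
# Thread's example numbers: 22 TB fulls, ~2 TB daily change, 14 points kept.

FULL_TB, DAILY_TB, POINTS = 22.0, 2.0, 14

logical_tb = FULL_TB * POINTS              # what 14 standalone fulls represent
physical_tb = FULL_TB + DAILY_TB * POINTS  # block-cloned chain on ReFS
print(f"ReFS effective ratio: {logical_tb / physical_tb:.1f}:1")  # ~6:1

# A 35:1 appliance ratio is typically quoted against years of weekly fulls;
# with only 2 weeks of retention there isn't enough redundant data on disk
# for an appliance to pull far ahead of the figure above.
```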
Here's a tool for calculating repository sizing for non-dedupe appliances.
http://rps.dewin.me/
Joe