- Enthusiast
- Posts: 36
- Liked: 5 times
- Joined: May 29, 2018 1:06 pm
- Full Name: Jeff Huston
Veeam V10 AWS metrics?
I'm looking at moving from Azure to AWS to leverage the immutability features of V10/AWS, as well as the native object-level tiering Veeam offers in V10. We currently use another hardware solution to move the data to Azure, and we're sunsetting it very soon.
I'm looking for real-life metrics on what a backup set looks like from a cost standpoint, as well as any space savings you're seeing from scale-out cloud object tiering. I've read the space savings can be pretty good, but I haven't seen numbers anywhere on what that looks like. My full backup set is somewhere around 26 TB a month, but even if you have data around a smaller number, that would be greatly appreciated.
- Product Manager
- Posts: 14836
- Liked: 3082 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Veeam V10 AWS metrics?
Hello,
The space savings are about what you get with ReFS/XFS block cloning. So the main question is whether your 26 TB is block-cloned ReFS data or sits on a classic file system.
A rough estimate can be made with the restore point simulator, using ReFS: https://rps.dewin.me/
Please note that there are also space savings with active fulls on object storage (the same as for synthetic fulls with ReFS).
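As a rough illustration of that difference, here is a minimal Python sketch (my own simplification, not the simulator's actual algorithm; the 5% daily change rate is only an assumed example value):

```python
import math

full_tb = 26.0       # one full backup set (from the question above)
change_rate = 0.05   # ASSUMED 5% daily change rate -- adjust to your data
days = 30            # daily restore points kept

# Classic file system with weekly active fulls: every full costs full space.
fulls = math.ceil(days / 7)
classic_tb = fulls * full_tb + days * full_tb * change_rate

# Block cloning (ReFS/XFS) or object storage: after the first full, each
# restore point -- whether labeled full or incremental -- only stores the
# changed blocks.
cloned_tb = full_tb + days * full_tb * change_rate

print(f"classic estimate:     {classic_tb:.1f} TB")   # 169.0 TB
print(f"block-clone estimate: {cloned_tb:.1f} TB")    # 65.0 TB
```

The simulator linked above does this properly per retention policy; the sketch only shows why block-cloned fulls change the picture so much.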
Best regards,
Hannes
- Enthusiast
- Posts: 36
- Liked: 5 times
- Joined: May 29, 2018 1:06 pm
- Full Name: Jeff Huston
Re: Veeam V10 AWS metrics?
Thanks Hannes. My repo is a ReFS repo running on Server 2016, and I do see substantial space savings with it. I'll take a look at the simulator and see what I can come up with.
- Veteran
- Posts: 323
- Liked: 25 times
- Joined: Jan 02, 2014 4:45 pm
[MERGED] Estimating how many GBs of changes will be uploaded to S3?
We currently have Veeam configured to upload a monthly GFS to Azure storage for 7-year retention. Is there a way to estimate how much data (i.e., how many GBs of changes) will be uploaded to Azure each month by looking at the VBK file size in Veeam?
Example: I have a SOBR offload job running with 28 VMs, one of which is 19.3 TB. Veeam is showing 4.4 TB read at 15 MB/s. How do I find out how much more data Veeam needs to upload?
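To show where I'm at, here's my back-of-the-envelope arithmetic so far (a rough sketch; it assumes the 4.4 TB read counter applies to that 19.3 TB VM and that the 15 MB/s rate holds):

```python
vm_tb = 19.3      # size of the big VM
read_tb = 4.4     # the "read" counter so far
rate_mb_s = 15    # current processing rate

remaining_tb = vm_tb - read_tb                  # 14.9 TB still to read
remaining_mb = remaining_tb * 1024 * 1024       # TB -> MB (binary units)
eta_hours = remaining_mb / rate_mb_s / 3600

print(f"remaining to read: {remaining_tb:.1f} TB")
print(f"ETA at 15 MB/s:    {eta_hours:.0f} h (~{eta_hours / 24:.1f} days)")

# What actually lands in Azure should be less than what is read, since the
# offload only uploads new (deduplicated, compressed) blocks.
```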
- Product Manager
- Posts: 14836
- Liked: 3082 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: Veeam V10 AWS metrics?
Hello,
Do the answers above help?
Best regards,
Hannes
- Veteran
- Posts: 323
- Liked: 25 times
- Joined: Jan 02, 2014 4:45 pm
Re: Veeam V10 AWS metrics?
Thanks Hannes. Oddly enough, when I look at a ReFS volume, the VBK file size vs. size on disk is the same. And I can indeed see in the job history that fast cloning completed successfully. Am I missing something?
- Chief Product Officer
- Posts: 31802
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
Re: Veeam V10 AWS metrics?
That's normal. If a block is shared between multiple files, you can't really attribute it to any particular one, can you? So, you can only gauge actual savings at the volume level - and that's exactly where you need to be looking!
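If you want to put a number on it, here is a minimal sketch (an illustration only, not a Veeam tool; the repository path is a made-up example). It compares the logical total of the backup files against what the volume actually reports as used; the gap is your block-clone savings:

```python
import os
import shutil

# Example path -- substitute your actual repository folder.
repo = r"E:\VeeamRepo"

# Sum the logical (apparent) sizes of all files in the repository.
logical = 0
for root, _dirs, files in os.walk(repo):
    for name in files:
        try:
            logical += os.path.getsize(os.path.join(root, name))
        except OSError:
            pass  # skip files that disappear or cannot be stat'ed

# Ask the OS what the volume actually has allocated.
drive = os.path.splitdrive(repo)[0] + "\\"
used = shutil.disk_usage(drive).used

tb = 1024 ** 4
print(f"logical file total: {logical / tb:.2f} TB")
print(f"volume used:        {used / tb:.2f} TB")
print(f"implied savings:    {(logical - used) / tb:.2f} TB")

# Caveat: "volume used" includes anything else stored on that volume, so
# this only gives a clean number on a dedicated repository volume.
```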