Discussions related to using object storage as a backup target.
oleg.feoktistov
Veeam Software
Posts: 1912
Liked: 635 times
Joined: Sep 25, 2019 10:32 am
Full Name: Oleg Feoktistov

Re: Veeam and Azure Blob Storage API calls

Post by oleg.feoktistov »

Hi @ilovecats. I merged your post with the existing thread. Hope you find the answers above helpful. Thanks!
HannesK
Product Manager
Posts: 14287
Liked: 2877 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Veeam and Azure Blob Storage API calls

Post by HannesK »

Please see the answers above. It sounds like you changed the block size, which can result in 4x the API costs.

The default values are optimized for balanced pricing, yes. Larger blocks mean higher storage costs (and egress costs on restore).

Also, please check the FAQ; I updated it a few weeks ago: post338749.html#p338749
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev »

ilovecats wrote: Jun 28, 2020 4:14 am I noticed in my Azure blob container that the files Veeam uploaded are not full .VIB files etc., but rather its own format of folders and many small parts. Could this process have been too wasteful in terms of write operations?
It's the opposite: this approach reduces the number of PUT operations by offloading only data blocks that are not already present in the bucket. Let's say you have a backup file of 1GB in size (1000 blocks). Uploading the entire backup file would mean 1000 PUT operations; however, uploading only the blocks that are unique to this backup file will require just 100 PUT operations (assuming only 10% of blocks in the given backup file are unique).
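For illustration, a minimal Python sketch of that math, reusing the example's figures (1GB file, 1MB blocks, 10% unique; 1GB is treated as 1000MB to match the round numbers):

```python
# PUT-count math from the example above: 1 GB file, 1 MB blocks,
# 10% unique blocks (using 1 GB ~ 1000 MB to match the round numbers).

def put_operations(backup_size_gb, block_size_mb, unique_fraction):
    """PUT requests needed to offload one backup file to object storage."""
    total_blocks = backup_size_gb * 1000 / block_size_mb
    return int(total_blocks * unique_fraction)

print(put_operations(1, 1, 1.0))  # uploading every block: 1000 PUTs
print(put_operations(1, 1, 0.1))  # uploading only unique blocks: 100 PUTs
```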

The default block size setting in Veeam is well balanced between storage and API costs. You can potentially increase the block size 4 times in Veeam settings, which will reduce the API costs 4x. However, this will in turn increase your storage costs, due to incremental backups becoming on average 2x bigger.
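A sketch of that trade-off, assuming placeholder rates (not real Azure prices) and the rule of thumb above that 4x larger blocks mean ~2x larger incrementals:

```python
# Placeholder rates -- substitute your region's actual pricing.
PUT_PRICE_PER_10K = 0.10       # $ per 10,000 write operations (assumed)
STORAGE_PRICE_PER_GB = 0.0184  # $ per GB-month (assumed)

def monthly_cost(daily_incremental_gb, block_size_mb, days=30):
    """API cost + storage cost of a month of incremental offloads."""
    puts = daily_incremental_gb * 1000 / block_size_mb * days
    stored_gb = daily_incremental_gb * days
    return puts / 10_000 * PUT_PRICE_PER_10K + stored_gb * STORAGE_PRICE_PER_GB

# 4x larger blocks -> 4x fewer PUTs, but ~2x larger incrementals:
print(f"1 MB blocks: ${monthly_cost(10, 1):.2f}/month")
print(f"4 MB blocks: ${monthly_cost(20, 4):.2f}/month")
```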

I would say that if API costs are the real concern for you, then the best solution would be to simply use a cloud object storage provider that does not charge for API calls.
akraker
Enthusiast
Posts: 45
Liked: 5 times
Joined: Feb 11, 2019 6:19 pm
Full Name: Andrew Kraker

[MERGED] Azure blob large number of write operations every week

Post by akraker »

I have a scale-out repository sending data to Azure Cool Blob storage, using the option to copy backups to object storage as they are created. I am noticing that my Azure costs for write operations are higher than the cost to actually store the data. When I look at the graph, it looks like every Sunday I have around 1.6 million transactions at 3 PM, followed by a drop in the used capacity in Azure. I am guessing this is due to my weekly full backups being removed based on GFS retention? Is that considered a write operation? Any idea how I can get the number of write operations down, or is that pretty normal?
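A rough sketch of that guess, assuming the expired weekly full is around 1.6TB and stored as 1MB block blobs (one delete call per blob):

```python
# Each 1 MB Veeam block is stored as its own blob, and blobs are deleted
# one API call at a time -- so retiring a single weekly full produces
# roughly size / block-size transactions. The 1.6 TB figure is assumed.

block_size_mb = 1            # Veeam's default "local target" block size
retired_full_gb = 1600       # hypothetical size of the expired weekly full

delete_calls = retired_full_gb * 1000 / block_size_mb
print(f"~{delete_calls:,.0f} delete transactions")  # ~1.6 million
```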
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev »

Yet it's still an API operation. I guess this is also magnified by the fact that, unlike Amazon S3, Azure Blob Storage does not support a bulk delete API, making retention processing slow and expensive.
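A sketch of the difference, with hypothetical bucket/container names: boto3's delete_objects removes up to 1,000 keys per S3 request, while on Azure (without a bulk delete API, as described here) each blob costs its own call:

```python
import boto3
from azure.storage.blob import ContainerClient

keys = [f"block-{i}" for i in range(100_000)]  # hypothetical object names

# Amazon S3: DeleteObjects accepts up to 1,000 keys per request,
# so removing 100,000 objects costs only 100 API calls.
s3 = boto3.client("s3")
for i in range(0, len(keys), 1000):
    batch = [{"Key": k} for k in keys[i:i + 1000]]
    s3.delete_objects(Bucket="my-backup-bucket", Delete={"Objects": batch})

# Azure Blob Storage, without a bulk delete API: the same cleanup is
# 100,000 individual DeleteBlob calls, each billed separately.
container = ContainerClient.from_connection_string("<conn-string>", "my-backups")
for key in keys:
    container.delete_blob(key)
```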
akraker
Enthusiast
Posts: 45
Liked: 5 times
Joined: Feb 11, 2019 6:19 pm
Full Name: Andrew Kraker

Re: Veeam and Azure Blob Storage API calls

Post by akraker »

Thanks for merging my post into this one.
Is there any good way to curb these costs? I saw a mention of using a larger block size. Maybe I would be better off on the Hot tier instead of Cool? Or maybe I can change my retention policy in some way to reduce the delete operations?
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev »

The Hot tier might provide slightly better results from a cost perspective. I didn't do the math for Azure, but this is certainly the case for Amazon. Using S3 IA there (analogous to the Azure Blob Cool tier) does not make economic sense when copying all backups. We recommend using it only when the Capacity Tier is configured so that only GFS backups on long-term retention are offloaded to object storage.
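A toy model of why, using placeholder rates rather than a real price sheet: an IA-like class charges less per GB-month but more per request, and bills a 30-day minimum even for objects deleted earlier, which penalizes short-lived incrementals:

```python
def monthly_cost(gb, puts, gb_rate, put_rate_per_10k, min_days=0, lifetime_days=10):
    """Storage + write cost for objects that live `lifetime_days`,
    billed for at least `min_days` (the IA-style minimum duration)."""
    billable_days = max(lifetime_days, min_days)
    return gb * gb_rate * billable_days / 30 + puts / 10_000 * put_rate_per_10k

# Placeholder rates, not real quotes:
std = monthly_cost(500, 600_000, gb_rate=0.023,  put_rate_per_10k=0.05)
ia  = monthly_cost(500, 600_000, gb_rate=0.0125, put_rate_per_10k=0.10, min_days=30)
print(f"standard-like: ${std:.2f}, IA-like: ${ia:.2f}")  # IA-like costs more here
```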

Otherwise, there's not much you can do except change your object storage provider to one that offers a bulk delete API, or one that does not charge for API calls at all.
akraker
Enthusiast
Posts: 45
Liked: 5 times
Joined: Feb 11, 2019 6:19 pm
Full Name: Andrew Kraker

Re: Veeam and Azure Blob Storage API calls

Post by akraker »

Thanks for the insight. Any idea whether I can convert my Cool tier to Hot without causing issues? Or do I have to create a new blob container and restart the backup chain?
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev »

You should check with Microsoft whether such a conversion is supported. For Veeam, the blob storage tier of the container makes no difference.
akraker
Enthusiast
Posts: 45
Liked: 5 times
Joined: Feb 11, 2019 6:19 pm
Full Name: Andrew Kraker

Re: Veeam and Azure Blob Storage API calls

Post by akraker »

Okay. I wasn't sure if that changed anything from Veeam's perspective on the blob storage. I know I read somewhere that tiering using Azure's lifecycle management is not supported by Veeam, so I was thinking that manually converting between Cool and Hot might cause issues as well.
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev »

There should not be any issues between the Hot and Cool tiers, because they are absolutely identical from an API and functionality perspective. We support both, and we don't even "know" which one we are working with, because it is irrelevant to us. However, the same is not the case with the Archive tier, which is totally different.
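A small sketch with the azure-storage-blob SDK (connection string and names are hypothetical) of why the Hot/Cool distinction is invisible to a client, while Archive is not:

```python
from azure.storage.blob import BlobClient

# Hypothetical connection string, container, and blob names.
blob = BlobClient.from_connection_string("<conn-string>", "veeam-backups", "block-0001")

# Hot and Cool are identical at the API level: the same read call works
# on either tier, so a client never needs to know which one it is using.
data = blob.download_blob().readall()

# Archive is different: an archived blob must first be rehydrated back
# to Hot or Cool (which can take hours) before it can be read at all.
blob.set_standard_blob_tier("Hot")  # starts rehydration if the blob is archived
```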
akraker
Enthusiast
Posts: 45
Liked: 5 times
Joined: Feb 11, 2019 6:19 pm
Full Name: Andrew Kraker

Re: Veeam and Azure Blob Storage API calls

Post by akraker » 1 person likes this post

I don't want to get too off-topic, but if that is the case, then why is lifecycle management not supported if I use the Hot tier initially and let it cycle out to the Cool tier? Is that because the data would actually need to physically move or change during the automated conversion?
akraker
Enthusiast
Posts: 45
Liked: 5 times
Joined: Feb 11, 2019 6:19 pm
Full Name: Andrew Kraker

Re: Veeam and Azure Blob Storage API calls

Post by akraker » 2 people like this post

I was actually able to figure out that it was not the delete operations I was accumulating costs for; it was the accumulated PutBlob operations. DeleteBlob is thankfully not counted. I used the Azure calculator and determined I was better off with the Hot tier for the amount of data I am storing.

The following API calls are considered Write Operations: PutBlob, PutBlock, PutBlockList, AppendBlock, SnapshotBlob, CopyBlob, and SetBlobTier (when it moves a blob from Hot to Cool, Cool to Archive, or Hot to Archive).
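A minimal sketch that tallies billable writes from a list of logged operation names using that classification (how you collect the names from your storage logs is up to you):

```python
from collections import Counter

# Per the list above; note DeleteBlob is absent.
WRITE_OPS = {"PutBlob", "PutBlock", "PutBlockList", "AppendBlock",
             "SnapshotBlob", "CopyBlob", "SetBlobTier"}

def billable_writes(operation_names):
    """Tally billable write operations from logged operation names."""
    counts = Counter(op for op in operation_names if op in WRITE_OPS)
    return counts, sum(counts.values())

counts, total = billable_writes(["PutBlob", "PutBlob", "DeleteBlob", "SetBlobTier"])
print(counts, total)  # DeleteBlob is ignored -> 3 billable writes
```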
dariusz.tyka
Enthusiast
Posts: 54
Liked: 4 times
Joined: Jan 21, 2019 1:38 pm
Full Name: Dariusz Tyka

Re: Veeam and Azure Blob Storage API calls

Post by dariusz.tyka »

Hi,

I'm now testing Amazon S3 as object storage. I have two jobs pointing to a SOBR with copy/move data to Amazon S3. Immutability is set to 7 days.
From the session report I can see that the average data change per backup is 3GB for one job and 5GB for the other. Both jobs are configured with local target as the storage optimization, so I have a 1MB block size. The jobs have been running for some time, so I already have around 500GB of data in the S3 bucket.
The strange thing is that for this month, i.e. from 01.07 till 10.07, I already have 580k PUT, COPY, POST, or LIST requests for this S3 bucket.
Both jobs were executed 7 times in July, so multiplying that by 8GB (3GB+5GB) would be 56GB of changed data this month. There was a full backup last Friday, but anyhow, only changed blocks were uploaded to S3. 56GB would mean around 58k 1MB blocks/requests. Of course there can be some more, but not 10x more. There were no restore requests in July, only backup copy/move to S3.

Is this normal?

Dariusz
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev » 1 person likes this post

Yes, this seems about right.

You're looking at the compressed data size, but as explained above, what matters is the source data size (the number of 1MB blocks at the source). With 20-30GB worth of source data blocks changed daily and 7 backups, that makes 150-200K source blocks to process, and so the corresponding number of WRITE operations. In addition, blocks that did NOT change in July had their immutability updated once to extend their object lock time. That gets you to the number you're seeing.

If you increase the block size in Veeam to 4MB, then you will see 4x fewer API operations, but incremental backups will become on average 2x larger, which means more storage costs. In other words, the storage vendor will "get you" either way ;) so across all considerations, 1MB blocks are optimal.
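A rough reconstruction of that arithmetic in Python; every figure is an assumption for illustration:

```python
# Every figure below is an assumption for illustration. The 3+5 GB/day of
# *compressed* change corresponds to roughly 20-30 GB of changed 1 MB
# blocks at the source; take the midpoint.

backups_in_period = 7
daily_source_change_gb = 25
block_mb = 1

changed_block_puts = daily_source_change_gb * 1000 / block_mb * backups_in_period
lock_renewals = 400_000  # assumed one-time object-lock extensions on unchanged blocks

print(f"~{changed_block_puts:,.0f} PUTs for changed blocks")            # ~175,000
print(f"~{changed_block_puts + lock_renewals:,.0f} requests in total")  # near the observed 580k
```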
ChrisAnderson_2019
Influencer
Posts: 12
Liked: 1 time
Joined: Jan 28, 2019 9:05 am
Full Name: Chris Anderson

Re: Veeam and Azure Blob Storage API calls

Post by ChrisAnderson_2019 »

Hello,

Going through a similar cost "estimation" exercise. Just wanted to clarify: "source data" is the source backup file, not the source VM disk?

Cheers.
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland

Re: Veeam and Azure Blob Storage API calls

Post by Gostev » 1 person likes this post

It's the source machine's disk.