Discussions specific to object storage
-
mcz
- Expert
- Posts: 308
- Liked: 58 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
-
Contact:
Post
by mcz » Jan 25, 2019 11:22 am
this post
Anton, I've now attached object storage (Azure) to our SOBR and it is beautiful.
BUT: immutable storage does conflict with the current Veeam implementation... Would you like to discuss this topic with your colleagues from product management?

-
veremin
- Product Manager
- Posts: 17055
- Liked: 1473 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
-
Contact:
Post
by veremin » Jan 25, 2019 1:39 pm
this post
No support for immutable storage at the moment, but thanks for the feature request!
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Jan 25, 2019 4:23 pm
3 people like this post
It is unlikely that immutable storage will ever be supported for Capacity Tier due to its WORM nature. There are just too many issues around the "online storage" use case, including basic things you don't immediately realize, such as the inability to clean up even incomplete or failed writes.
However, we will certainly consider immutability for a possible future Archive Tier, which is a different use case from Capacity Tier and will likely have a vastly different integration approach, due to the peculiarities of Glacier-type storage and the issues that enabling immutability brings. This also means it will lose most of the benefits of Capacity Tier and will no longer be "beautiful", limiting you to true archival scenarios (essentially making it an off-site tape archive replacement).
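To make the conflict concrete: under an immutability (WORM) policy, even housekeeping deletes are rejected by the storage. A minimal sketch (not Veeam code) using the azure-storage-blob Python package; the connection string, container, and blob names are hypothetical placeholders.

```python
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="veeam-capacity", blob="incomplete-upload.tmp")

try:
    # A backup product would do this to clean up a failed or partial write...
    blob.delete_blob()
except HttpResponseError as e:
    # ...but WORM storage refuses: the object cannot be deleted or
    # overwritten until its immutability policy expires.
    print(f"Cleanup rejected by immutable storage: {e}")
```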
-
dimaslan
- Service Provider
- Posts: 53
- Liked: 7 times
- Joined: Jul 01, 2017 8:02 pm
- Full Name: Dimitris Aslanidis
-
Contact:
Post
by dimaslan » Jan 30, 2019 6:06 pm
this post
Not sure if this was answered already, but since Azure Blob Archive storage charges for delete operations as well, won't there be a charge when, say, 13 months from now you send the new monthly backup and have to delete the oldest one?
Thank you.
-
veremin
- Product Manager
- Posts: 17055
- Liked: 1473 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
-
Contact:
Post
by veremin » Jan 30, 2019 6:22 pm
1 person likes this post
As far as I know, you have to pay a deletion fee only if the blob has not stayed in the Archive tier for 180 days (this is called the early deletion period). Thanks!
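For a rough feel of how that plays out in the 13-month scenario above, here is a back-of-the-envelope sketch. The 180-day minimum is Azure's documented early deletion period; the per-GB rate is an illustrative placeholder, not a real price.

```python
ARCHIVE_MIN_DAYS = 180  # Azure Archive tier early deletion period

def early_deletion_fee(size_gb: float, days_stored: int, rate_gb_month: float) -> float:
    """Prorated fee for the days remaining in the 180-day minimum."""
    remaining_days = max(0, ARCHIVE_MIN_DAYS - days_stored)
    return size_gb * rate_gb_month * (remaining_days / 30.0)

# A 1 TB monthly backup deleted after 13 months (~395 days) incurs no fee:
print(early_deletion_fee(1024, 395, 0.002))  # 0.0
# The same blob deleted after only 60 days is charged for the remaining 120:
print(early_deletion_fee(1024, 60, 0.002))   # ~8.19
```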
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Jan 30, 2019 7:00 pm
this post
I have my scale-out repository set up so backups go to the NAS, and for my Capacity Tier I have the offload set at 0 days. The full backup has been taken and is sitting on the NAS; 2 hours later the backup has not made it into Azure Blob storage. How long does it take? Once it does appear in blob storage, will it be deleted from the NAS? If it is deleted from the NAS, is there any way to configure that not to happen and just have it as a copy in Azure?
Would appreciate any help; I'm struggling with the documentation and not finding it very clear at all.
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Jan 30, 2019 8:24 pm
1 person likes this post
Backups belonging to an active chain will not be uploaded.
Just a reminder: Capacity Tier is designed to keep the oldest backup files, which you are unlikely to have to restore from. In classic storage tiering terms, this is "cold" data. A 0-day offload schedule is not the correct way to use Capacity Tier: something created 2 hours ago is very much "hot" data that you may have to restore in the next few hours, so there's really no point in removing it from on-prem immediately after creation.
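A simplified sketch of the eligibility rule described above (not Veeam's actual code): a restore point is a candidate for offload only when its chain is sealed and it has aged past the configured operational restore window. This is why a full created 2 hours ago, on a still-active chain, stays on the NAS even with a 0-day setting.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RestorePoint:
    created: datetime
    chain_sealed: bool  # True once a newer full has closed the chain

def eligible_for_offload(rp: RestorePoint, window_days: int, now: datetime) -> bool:
    old_enough = now - rp.created > timedelta(days=window_days)
    return rp.chain_sealed and old_enough

now = datetime.now()
fresh_full = RestorePoint(created=now - timedelta(hours=2), chain_sealed=False)
print(eligible_for_offload(fresh_full, window_days=0, now=now))  # False: chain still active
```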
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Jan 31, 2019 3:24 pm
this post
Thank you @gostev, this has helped my understanding greatly.
So if I understand correctly, I can set up my backup routine along the lines below with my offload schedule set to 7 days:
Week 1
Monday - Thurs Incremental
Fri Full backup
Week 2
Monday - Thurs Incremental
Fri Full backup
On Friday my full backup from week 1 should be getting offloaded to Azure Blob Storage.
Is my understanding correct? Is there a better way to be doing it?
My only concern is that by the time week 2 comes and the week 1 full backup can be offloaded to Azure, the data is 2 weeks old.
Thank you for your help!
--- edit: now I have two full backups, and the oldest one has been shipped off to Azure Blob storage successfully. Should the one which has been shipped off to Azure have been deleted locally on the NAS? It is still there...
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Jan 31, 2019 4:02 pm
2 people like this post
It's not really there; look at its size.
What is left on the NAS is a VBK stub containing metadata only. This serves as a local cache, both for performance considerations and to reduce the number of API calls, which object storage providers often charge extra for.
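Since offloaded files shrink to a metadata-only shell, an unusually tiny .vbk is the giveaway. A small sketch for spotting such stubs; the size threshold and repository path are illustrative assumptions, not Veeam-defined values.

```python
from pathlib import Path

STUB_THRESHOLD = 50 * 1024 * 1024  # assume stubs stay well under 50 MB

for vbk in Path(r"\\nas\veeam-repo").glob("*.vbk"):
    size = vbk.stat().st_size
    kind = "stub (data offloaded)" if size < STUB_THRESHOLD else "full local copy"
    print(f"{vbk.name}: {size / 1024**2:.1f} MiB -> {kind}")
```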
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Jan 31, 2019 4:04 pm
this post
Thank you so much Gostev!!! Slowly getting my head around this.
You're right, it is a little VBK file.
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Jan 31, 2019 4:45 pm
this post
One last question for the day: is functionality coming which will allow me to back up to the local NAS and have this copy also instantly sent up to Azure Blob storage, so I have the backup in two places? We have DPM by Microsoft, which has this functionality; we do a NAS backup and a few hours later it is sent up to Azure storage.
If there is a workaround which would let me do this, that would be great; otherwise our manager will force us onto DPM, which I really do not want. While DPM's cloud backups are spot on, we have a lot of problems with it crashing, etc.
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Jan 31, 2019 5:28 pm
1 person likes this post
networkup wrote: ↑Jan 31, 2019 4:45 pm
One last question for the day: is functionality coming which will allow me to back up to the local NAS and have this copy also instantly sent up to Azure Blob storage, so I have the backup in two places?
Correct, as per my post on the first page (copy mode).
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Jan 31, 2019 6:52 pm
this post
Perfect!!
What is the ETA on that, and which version? Is there a beta about that I could demo to my manager so he stops pushing for us to ditch it?
Found the post; apologies, I glazed over that the first time.
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Jan 31, 2019 8:02 pm
this post
It's planned for our 2019 release; however, we don't provide ETAs, because we only ship the code When It's Ready™.
A beta is a little too early to talk about, considering we just barely shipped the previous release.
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Feb 08, 2019 4:07 pm
this post
Quick question: how long do the restore points get stored in Azure Blob storage? Is it determined by "Restore points to keep on disk" on the backup jobs?
So if we have 30 days in "Restore points to keep on disk" in the backup job, on day 31 will one of the restore points be removed from Azure? Does Veeam send some message up to Azure to do this?
Thanks
-
veremin
- Product Manager
- Posts: 17055
- Liked: 1473 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
-
Contact:
Post
by veremin » Feb 08, 2019 4:23 pm
1 person likes this post
Is it determined by "Restore points to keep on disk" on the backup jobs?
Correct.
So if we have 30 days in "Restore points to keep on disk" in the backup job, on day 31 will one of the restore points be removed from Azure?
On the 31st day it will be removed from Azure Blob.
Thanks!
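A toy sketch of the behaviour confirmed above: the job-level retention setting governs all tiers, so on day 31 a 30-day policy prunes the oldest point even though it now lives in Azure Blob. The dates are illustrative.

```python
from datetime import date, timedelta

retention_days = 30
today = date(2019, 2, 8)
restore_points = [today - timedelta(days=n) for n in range(31)]  # days 0..30

expired = [rp for rp in restore_points if (today - rp).days >= retention_days]
print(f"{len(expired)} point(s) to delete from object storage: {expired}")
```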
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Feb 12, 2019 4:48 pm
this post
Say a scenario came up where we lost the Veeam Backup & Replication server plus the SQL DB and had to start from scratch with no NAS / no local backups. We install Veeam Backup & Replication on a new VM. All of a sudden the domain controller requires a restore; how do we link Veeam Backup & Replication back to Azure so we can pull the backups down from Azure Blob and use them for a restore?
Is there an article or advice for this scenario?
I could not see anything from a quick Google search.
-
veremin
- Product Manager
- Posts: 17055
- Liked: 1473 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
-
Contact:
Post
by veremin » Feb 12, 2019 5:56 pm
2 people like this post
You will need to:
- Install a backup server
- Create an Object Storage Repository using the same information (credentials, folder, container)
- Create a Scale-Out Backup Repository
- Attach the object storage repository as the Capacity Tier
The backup server will re-scan the object storage repository, find the previous backups there, and download dehydrated copies of them to a local extent. After that, you will be able to execute the restore process.
Thanks!
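Before step 2 it can be reassuring to confirm that the old backups really are still in the container. A minimal sketch using the azure-storage-blob Python package; the connection string, container name, and folder prefix are hypothetical and must match what the original repository used.

```python
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="veeam-capacity"
)

# Veeam stores objects under the folder chosen at repository creation;
# listing by that prefix shows whether the old backup data survived.
total_bytes = 0
for blob in container.list_blobs(name_starts_with="Veeam/"):
    total_bytes += blob.size
print(f"Found {total_bytes / 1024**3:.1f} GiB under the repository prefix")
```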
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Feb 13, 2019 10:11 am
this post
Thanks for the great instructions; I will try to set up a lab to run through the scenario shortly.
If the backups have been encrypted on the Capacity Tier, are there any gotchas we need to be aware of when setting up the Scale-Out Backup Repository?
I assume all we would do when setting up the Scale-Out Backup Repository is enter the password again under the Capacity Tier option which reads "Encrypt data uploaded to object storage", and it would be able to decrypt the backups and use them for restores etc.?
Thanks

-
veremin
- Product Manager
- Posts: 17055
- Liked: 1473 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
-
Contact:
Post
by veremin » Feb 13, 2019 1:49 pm
this post
Your understanding is correct; you will just need to input the same password that was used for encryption. Thanks!
-
crackocain
- Service Provider
- Posts: 143
- Liked: 9 times
- Joined: Dec 14, 2015 8:20 pm
- Full Name: Mehmet Istanbullu
- Location: Turkey
-
Contact:
Post
by crackocain » Feb 14, 2019 1:51 pm
this post
Hi Vladimir,
Are all of the full and incremental backups downloaded automatically? Can we see the download progress?
Thank you.
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Feb 14, 2019 9:38 pm
this post
Hi Mehmet, what scenario are you talking about? Your question does not seem connected to the discussion. One thing is for sure: nothing significant ever gets downloaded automatically without your action, because it costs money with Azure.
-
crackocain
- Service Provider
- Posts: 143
- Liked: 9 times
- Joined: Dec 14, 2015 8:20 pm
- Full Name: Mehmet Istanbullu
- Location: Turkey
-
Contact:
Post
by crackocain » Feb 15, 2019 8:57 am
this post
The lost Veeam Backup & Replication server scenario that networkup wrote about.
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Feb 15, 2019 1:52 pm
this post
Then I am confirming: no backups will be downloaded automatically, only metadata. If you want to download the actual backups, you will have to manually initiate the process. Normally this is not needed, though, because you can restore directly from object storage. So the only times when you would want to download are scenarios where you need to perform multiple restores from the same backup; for example, provisioning multiple on-demand sandboxes for developers to play with.
-
mcz
- Expert
- Posts: 308
- Liked: 58 times
- Joined: Jul 19, 2016 8:39 am
- Full Name: Michael
-
Contact:
Post
by mcz » Feb 18, 2019 8:17 am
this post
Question for Anton & Vladimir: would the mentioned scenario (downloading full + increments) be possible at the moment? AFAIK, at the moment (using U4) you can only "outsource" the oldest restore points (= Capacity Tier), but these do not include the latest full (VBK). So your latest full would have been on-prem, and it got lost during the disaster. Being able to upload all restore points would be part of the next update (can't wait to get it...).
Please correct me if I'm wrong; thanks.
BTW: where has the "quote" button gone? I don't see it anymore...
-
veremin
- Product Manager
- Posts: 17055
- Liked: 1473 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
-
Contact:
Post
by veremin » Feb 18, 2019 4:41 pm
this post
You're correct: only the inactive or sealed part of a backup chain gets offloaded to the Capacity Tier. More information can be found
here.
The ability to copy all (not just move the oldest) backup files to the Capacity Tier as soon as they are created is indeed planned for future releases.
Thanks!
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Feb 18, 2019 11:09 pm
this post
mcz wrote: ↑Feb 18, 2019 8:17 am
BTW: Where is the "quote" button gone, I don't see it anymore...
It is purposely disabled to prevent people from quoting the immediately preceding post; you should still see this button on other posts. Thanks!
-
networkup
- Influencer
- Posts: 14
- Liked: never
- Joined: Oct 09, 2018 1:03 pm
- Full Name: Adam Wilkinson
-
Contact:
Post
by networkup » Feb 22, 2019 12:29 pm
this post
Hi guys, just after some quick confirmation of my understanding.
We have all of our jobs running reverse incremental backups all week, with an active full backup every Saturday. Does this mean Friday's file will stay a VBK and become inactive, allowing it to be in a state ready for Azure offload? And Saturday's VBK will be the start of the new active chain?
I have this diagram to explain
https://imgur.com/a/aQILwm2
grey = inactive chain
green = active chain
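A toy sketch of the sealing rule in that diagram: with an active full every Saturday, each chain becomes inactive (and thus offload-ready) the moment the next full starts a new one. Dates are illustrative.

```python
from datetime import date

fulls = [date(2019, 2, 9), date(2019, 2, 16)]  # consecutive Saturdays

def chain_state(full_date: date, all_fulls: list[date]) -> str:
    newer_full_exists = any(f > full_date for f in all_fulls)
    return "inactive (ready for offload)" if newer_full_exists else "active"

for f in fulls:
    print(f, "->", chain_state(f, fulls))
# 2019-02-09 -> inactive (ready for offload)
# 2019-02-16 -> active
```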
-
Gostev
- SVP, Product Management
- Posts: 25094
- Liked: 3673 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
-
Contact:
Post
by Gostev » Feb 22, 2019 4:44 pm
this post
Hi Adam, that is correct. Thanks!
-
dimaslan
- Service Provider
- Posts: 53
- Liked: 7 times
- Joined: Jul 01, 2017 8:02 pm
- Full Name: Dimitris Aslanidis
-
Contact:
Post
by dimaslan » Feb 25, 2019 9:50 am
this post
I just saw this mentioned elsewhere and wanted to bring it up here as well for verification, because it's relevant: the Capacity Tier can only be Cool tier Blob and not Archive, correct? I saw this is because once something is moved to the Archive tier, Veeam loses connection to it. This of course changes pricing by 5x. Is a VTL on Azure the only solution for this for now, then?
Even with 30 TB, the difference is significant.
Thanks.
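For a rough sense of the gap at the 30 TB mentioned, here is a back-of-the-envelope sketch. The per-GB monthly rates are illustrative placeholders reflecting the roughly 5x difference noted above, not current Azure list prices.

```python
size_gb = 30 * 1024
cool_rate, archive_rate = 0.010, 0.002  # assumed $/GB-month, illustrative only

print(f"Cool:    ${size_gb * cool_rate:,.0f}/month")     # ~$307/month
print(f"Archive: ${size_gb * archive_rate:,.0f}/month")  # ~$61/month
```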