Anton, I've now attached object storage (Azure) to our SOBR and it is beautiful.
BUT: immutable storage conflicts with the current Veeam implementation... Would you like to discuss this topic with your colleagues from product management?
It is unlikely that immutable storage will ever be supported for Capacity Tier due to its WORM nature. There are just too many issues around the "online storage" use case, starting with basic things you don't immediately realize - like the inability to even clean up incomplete or failed writes.
However, we will certainly consider immutability for a possible future Archive Tier, which is a different use case from Capacity Tier and will likely have a vastly different integration approach, due to the peculiarities of Glacier-type storage and the issues that enabling immutability brings. This also means it will lose most of the benefits of Capacity Tier, and it will no longer be "beautiful": it limits you to true archival scenarios (essentially making it an off-site tape archive replacement).
Not sure if this was answered already, I was just wondering: since Azure Blob Archive storage charges for delete operations as well, won't there be a charge when, let's say 13 months from now, you send up the new monthly backup and have to delete the oldest one?
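To illustrate the kind of charge being asked about, here is a back-of-the-envelope Python sketch. It assumes the Archive tier charges an early-deletion fee only for blobs removed before a 180-day minimum retention period, pro-rated for the remaining days; the per-GB rate is a placeholder, not Azure's actual price sheet.

```python
# Back-of-the-envelope sketch, not a billing calculator. Assumes an
# early-deletion fee applies only when an Archive blob is deleted before a
# 180-day minimum, pro-rated for the days remaining; the rate is a placeholder.
ARCHIVE_MIN_DAYS = 180
PLACEHOLDER_RATE_PER_GB_MONTH = 0.002  # hypothetical rate, check the Azure price sheet

def early_deletion_fee(size_gb, age_days):
    remaining_days = max(ARCHIVE_MIN_DAYS - age_days, 0)
    return size_gb * PLACEHOLDER_RATE_PER_GB_MONTH * (remaining_days / 30)

# A monthly full deleted after roughly 13 months (~395 days) is well past the
# minimum, so no early-deletion fee applies under this model:
print(early_deletion_fee(size_gb=500, age_days=395))  # 0.0
```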
I have my scale-out repository set up so backups go to the NAS, and for my capacity tier I have the offload period set at 0 days. The full backup has been taken and is sitting on the NAS; 2 hours later the backup has not made it into Azure Blob storage. How long does it take? Once it does appear in blob storage, will it be deleted from the NAS? If it is deleted from the NAS, is there any way to configure that not to happen and just have a copy-only backup in Azure?
Would appreciate any help; I'm struggling with the documentation and not finding it very clear at all.
Just to remind: Capacity Tier is designed to keep the oldest backup files, which you are unlikely to ever have to restore from. In classic storage tiering terms, this is "cold" data. A 0-day offload schedule is not the correct way to use Capacity Tier, as something created 2 hours ago is very much "hot" data that you may have to restore from in the next few hours - there's really no point removing it from on-prem immediately after creation.
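For a rough mental model of that policy, here is a minimal Python sketch - not Veeam's actual logic; the field names and the 7-day window are assumptions based on this discussion - showing which restore points would be considered old enough to move to the capacity tier:

```python
from datetime import datetime, timedelta

# Illustrative sketch only (not Veeam's code): a restore point is eligible to
# move to the capacity tier once its backup chain is sealed (inactive) and it
# falls outside the operational restore window kept on-prem.
OPERATIONAL_WINDOW_DAYS = 7  # assumed offload setting from the discussion

def eligible_for_offload(restore_points, now):
    cutoff = now - timedelta(days=OPERATIONAL_WINDOW_DAYS)
    return [rp for rp in restore_points
            if rp["chain_sealed"] and rp["created"] < cutoff]

points = [
    {"name": "week1-full.vbk", "created": datetime(2019, 1, 18), "chain_sealed": True},
    {"name": "week2-full.vbk", "created": datetime(2019, 1, 25), "chain_sealed": False},
]
# Only week1-full.vbk qualifies: its chain is sealed and it is older than 7 days.
print([rp["name"] for rp in eligible_for_offload(points, now=datetime(2019, 1, 28))])
```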
Thank you @gostev, this has helped my understanding greatly.
So if I understand correctly, I would set up my backup routine along the lines below, with my offload schedule set to 7 days:
Week 1
Monday - Thurs Incremental
Fri Full backup
Week 2
Monday - Thurs Incremental
Fri Full backup
On Friday of week 2, my full backup from week 1 should get offloaded to Azure Blob storage.
Is my understanding correct? Is there a better way of doing it?
My only concern is that by the time week 2 comes and the week 1 full backup can be offloaded to Azure, the data is two weeks old.
Thank you for the help!
---edit: Now I have two full backups, and the oldest one has been shipped off to Azure Blob storage successfully. Should the one which has been shipped off to Azure have been deleted locally on the NAS? It is still there...
It's not really there - look at its size. What is left on the NAS is a VBK stub containing metadata only; this serves as a local cache, for performance reasons and to reduce the number of API calls, which object storage providers often charge extra for.
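As an aside, a quick way to convince yourself that a local VBK is just a dehydrated stub is to compare its on-disk size with the expected size of the full restore point. A minimal sketch, with a hypothetical path and an arbitrary 1% threshold:

```python
import os

# Illustrative only: flag a local VBK as a dehydrated stub if what is on disk
# is a tiny fraction of the restore point's expected full size. The path,
# expected size, and 1% threshold are assumptions for the example.
def looks_like_stub(local_path, expected_full_size_bytes):
    return os.path.getsize(local_path) < expected_full_size_bytes * 0.01

print(looks_like_stub("/nas/backups/week1-full.vbk", expected_full_size_bytes=500 * 1024**3))
```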
One last question for the day: is functionality coming which will allow me to back up to the local NAS and have this copy also sent up to Azure Blob storage straight away, so I have the backup in two places? We have DPM from Microsoft, which has this functionality: we do a NAS backup and a few hours later it is sent up to Azure storage.
If there is a workaround which would let me do this, that would be great; otherwise our manager will force us onto DPM, which I really do not want. While DPM's cloud backups are spot on, we have a lot of problems with it crashing etc.
networkup wrote (Jan 31, 2019 4:45 pm): One last question for the day: is functionality coming which will allow me to back up to the local NAS and have this copy also sent up to Azure Blob storage straight away, so I have the backup in two places?
Correct, as per my post on the first page (copy mode).
Quick question: how long do the restore points get stored in Azure Blob storage? Is it determined by "Restore points to keep on disk" on the backup jobs?
So if we have 30 days in "Restore points to keep on disk" in the backup job, on day 31 will one of the restore points be removed from Azure? Does Veeam send some message up to Azure to do this?
Is it determined by "Restore points to keep on disk" on the backup jobs?
Correct
So if we have 30 days in "Restore points to keep on disk" in the backup job, on day 31 will one of the restore points be removed from Azure? Does Veeam send some message up to Azure to do this?
On the 31st day it will be removed from Azure Blob.
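A minimal sketch of the retention arithmetic described above, assuming one restore point per day and that the offloaded copy in Azure Blob is removed together with the restore point when it falls out of the job's retention:

```python
from datetime import date, timedelta

# Sketch only: with a 30-day retention setting and one restore point per day,
# the point created on day 1 falls out of retention on day 31, and its
# offloaded copy in Azure Blob is removed along with it.
RESTORE_POINTS_TO_KEEP = 30

def removal_date(created):
    return created + timedelta(days=RESTORE_POINTS_TO_KEEP)

print(removal_date(date(2019, 2, 1)))  # 2019-03-03, i.e. the 31st day
```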
If a scenario came about where we lost the Veeam Backup & Replication server plus the SQL database and had to start from scratch with no NAS and no local backups, we would install Veeam Backup & Replication on a new VM. If all of a sudden the domain controller requires a restore, how do we link Veeam Backup & Replication back to Azure so we can pull the backups down from Azure Blob and use them for a restore?
- Install backup server
- Create an Object Storage Repository using the same information (credentials, folder, container)
- Create a Scale-Out Backup Repository
- Attach object storage repository as capacity tier
The backup server will re-scan the object storage repository, find the previous backups there, and download dehydrated copies of them to the local extent. After that, you will be able to execute the restore process.
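This is not part of the Veeam steps above, but if you want to sanity-check from the new backup server that the container still holds the offloaded data before re-adding it as the capacity tier, a small sketch using the Azure Storage SDK for Python can list what is there. The connection string, container, and folder names below are placeholders:

```python
# pip install azure-storage-blob
from azure.storage.blob import ContainerClient

# Placeholders: point these at the same storage account, container and folder
# that the original Object Storage Repository used.
CONNECTION_STRING = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
CONTAINER = "veeam-capacity-tier"
FOLDER = "Veeam/Archive/"

container = ContainerClient.from_connection_string(CONNECTION_STRING, CONTAINER)

count, total_bytes = 0, 0
for blob in container.list_blobs(name_starts_with=FOLDER):
    count += 1
    total_bytes += blob.size

print(f"{count} objects, {total_bytes / 1024**3:.1f} GiB under {FOLDER}")
```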
Thanks for the great instructions, I will try to set up a lab to run through the scenario shortly.
If the backups have been encrypted on the capacity tier, are there any gotchas we need to be aware of when setting up the scale-out repository?
I assume all we would do when setting up the Scale-Out Backup Repository is enter the password again under the capacity tier option that reads "Encrypt data uploaded to object storage", and it would then be able to decrypt the backups and use them for restores etc.?
Hi Mehmet, what scenario are you talking about? Your question does not seem connected to the discussion. One thing is for sure: nothing significant ever gets downloaded automatically without your action, because that costs money with Azure.
Then I am confirming: no backups will be downloaded automatically, only metadata. If you want to download actual backups, you will have to initiate the process manually. Normally it's not needed though, because you can restore directly from object storage. So the only times you would want to download are scenarios where you need to perform multiple restores from the same backup - for example, provisioning multiple on-demand sandboxes for developers to play with.
Question to Anton & Vladimir: would the mentioned scenario (downloading the full + increments) be possible at the moment? AFAIK, at the moment (using U4) you can only "outsource" the oldest restore points to the capacity tier, and these do not include the full (VBK). So your latest full would have been on-prem, and it was lost during the disaster. Being able to upload all restore points would be part of the next update (can't wait to get it...).
Please correct me if I'm wrong - thanks.
BTW: Where is the "quote" button gone, I don't see it anymore...
Hi guys, just after some quick confirmation of my understanding.
We have all of our jobs running reverse incremental backups all week, with an active full backup every Saturday. Does this mean Friday's file will stay a VBK and become inactive, putting it in a state ready for Azure offload? And Saturday's VBK will be the start of the new active chain?
I just saw this mentioned elsewhere and wanted to bring it up here as well for verification, since it's relevant: Capacity Tier can only use Cool tier Blob storage and not Archive, correct? I saw this is because once something is moved to the Archive tier, Veeam loses its connection to it. This of course changes the pricing by roughly 5x. Is a VLT on Azure the only solution for this for now?
Even with 30 TB, the difference is significant.
Thanks.
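For a sense of the scale involved, a back-of-the-envelope sketch with placeholder per-GB-month prices (not the actual Azure rate card), just to show what a roughly 5x gap means at 30 TB:

```python
# Placeholder prices only, chosen to reflect a ~5x ratio between the tiers;
# substitute the real Azure rates for your region when comparing.
SIZE_GB = 30 * 1024            # 30 TB
COOL_PER_GB_MONTH = 0.01       # placeholder
ARCHIVE_PER_GB_MONTH = 0.002   # placeholder, roughly 5x cheaper

print(f"Cool:    ${SIZE_GB * COOL_PER_GB_MONTH:,.0f} per month")
print(f"Archive: ${SIZE_GB * ARCHIVE_PER_GB_MONTH:,.0f} per month")
```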