Discussions specific to object storage
sandsturm
Enthusiast
Posts: 99
Liked: 10 times
Joined: Mar 23, 2015 8:30 am
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by sandsturm » Jan 17, 2019 7:26 am

Hi gostev

Thanks for your answer, that sounds great. Just for my understanding: are these VBK shells containing the metadata stored on the SOBR AND in the S3 bucket as well? Or how does the recreation of these VBK shells work, i.e. where is this information stored?

thx,
sandsturm

mcz
Expert
Posts: 252
Liked: 48 times
Joined: Jul 19, 2016 8:39 am
Full Name: Michael
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by mcz » Jan 17, 2019 10:44 am 3 people like this post

sandsturm,

AFAIK, metadata is stored both on the SOBR and in the cloud storage, as you already assumed. The shells only contain the metadata, and the metadata points to the location of the data (in this case the cloud storage). So if your SOBR is running fine, Veeam reads the metadata from the SOBR, sees that the data is in the cloud and fetches it from there. If your SOBR had a disaster, don't worry: the metadata has also been transferred to the cloud storage, so you will still be able to do the restore.

Please correct me if I'm wrong, but that's what I took away from the last VeeamON event.

Gostev
SVP, Product Management
Posts: 24016
Liked: 3252 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 17, 2019 5:58 pm

Michael, you are spot on.

sandsturm
Enthusiast
Posts: 99
Liked: 10 times
Joined: Mar 23, 2015 8:30 am
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by sandsturm » Jan 21, 2019 6:48 am 2 people like this post

Very well!

Anything else would have really surprised me coming from Veeam anyway :-)

thx,
sandsturm

mcz
Expert
Posts: 252
Liked: 48 times
Joined: Jul 19, 2016 8:39 am
Full Name: Michael
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by mcz » Jan 24, 2019 1:13 pm

anthonyspiteri79 wrote:
Jan 07, 2019 1:28 pm
There is a manual way to do this from the UI to force the job. If you control-click on the SOBR name as shown below, there is an option to "Run Tiering Job Now". This will run the job on demand.

[Screenshot: SOBR context menu showing the "Run Tiering Job Now" option]

There is also a PowerShell command that you can run to achieve the same result:

Code:

Start-VBRCapacityTierSync -Repository SOBRNAME

Hi Anthony,

I'm using the GA build of Update 4 and I've set up a SOBR with a capacity tier (Azure Blob storage). While the PowerShell command works, I'm not able to select the option in the context menu:

[Screenshot: SOBR context menu without the "Run Tiering Job Now" option]

Am I missing something?
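
For reference, this is the invocation I'm using (just a minimal sketch; "SOBR-01" stands in for my repository name, and I'm assuming the Veeam snap-in still needs to be loaded in a fresh PowerShell session):

Code:

Add-PSSnapin VeeamPSSnapin                       # load the Veeam B&R PowerShell snap-in (9.5 U4)
Start-VBRCapacityTierSync -Repository "SOBR-01"  # trigger the capacity tier sync/offload on demand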

v.eremin
Product Manager
Posts: 16130
Liked: 1314 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by v.eremin » Jan 24, 2019 3:04 pm

Have you pressed Ctrl? It should be Ctrl + right-click.

mcz
Expert
Posts: 252
Liked: 48 times
Joined: Jul 19, 2016 8:39 am
Full Name: Michael
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by mcz » Jan 28, 2019 8:28 am

Oh, it seems I misunderstood the word "control-click". Of course it works if I press Ctrl, so thanks for the clarification, Vladimir. BTW: why is this option "hidden"? Wouldn't it make sense to always have it in the context menu?

v.eremin
Product Manager
Posts: 16130
Liked: 1314 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by v.eremin » Jan 28, 2019 1:26 pm

We don't believe there is a need to use this option other than on special occasions, since backups are automatically scanned every 4 hours anyway to determine candidates and start the offload to the Capacity Tier. It's hard to imagine someone wanting to continuously trigger this rescan before the next automatic run, other than perhaps for the purpose of doing a demo or POC. Thanks!

victor.bylin@atea.se
Service Provider
Posts: 25
Liked: 1 time
Joined: Oct 26, 2017 11:22 am
Full Name: Victor
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by victor.bylin@atea.se » Jan 30, 2019 1:04 pm

Hi again,

I just got some news that I didn't know about, and it pretty much renders all of my designs that back up everything in the first step worthless when it comes to adding S3.

"Second, please note that for Backup Copy jobs only fulls with assigned GFS flag are moved to capacity tier (for a reference, please check next User Guide page: https://helpcenter.veeam.com/docs/backu ... l?ver=95u4, 'Inactive Backup Chain for Backup Copy Job' section).


This means that for a design where you back up everything with storage snapshots and spread all backups across two sites,
and then want to spread some backups/VMs (think customers that pay more) out to a third site with S3 storage,
you can only send weekly backups to that third site.
Not daily backups, which means you will have 6 days with no backup on S3.
How do you think that is a good design, and what led to that decision?

In my opinion, S3 is the new storage that is going to explode in implementations, both on-prem and in the cloud, in the coming years.
So having a good design for it is very important.

Best Regards!

Gostev
SVP, Product Management
Posts: 24016
Liked: 3252 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 30, 2019 4:55 pm 2 people like this post

I actually do agree with you that S3 storage has great potential to explode in the coming years. However, reliability today is paramount, and we're not comfortable going "all in" on its support right away for all possible use cases. Yes, we do learn from our mistakes; remember the ReFS story? :wink:

This is exactly why we started leveraging S3 storage for your least important data (oldest backups) first. And in hindsight, it was a great decision, as the number of storage-specific quirks we encountered when testing real-world S3 storage implementations was totally unforeseen by us.

victor.bylin@atea.se
Service Provider
Posts: 25
Liked: 1 time
Joined: Oct 26, 2017 11:22 am
Full Name: Victor
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by victor.bylin@atea.se » Jan 31, 2019 8:02 am

Hi Gostev,

Thanks for the quick reply.
Yes, I do understand that you need to be able to support your product on your side.
But could you put in a feature request for not requiring the active chain on the Performance Tier,
so that incremental data can be sent to S3 with Backup Copy jobs?

Best regards!

v.eremin
Product Manager
Posts: 16130
Liked: 1314 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by v.eremin » Jan 31, 2019 11:45 am 2 people like this post

In fact, support for Capacity Tier copy mode (copying backups to object storage as soon as they are created) is already scheduled for the next product release. Thanks!

Gostev
SVP, Product Management
Posts: 24016
Liked: 3252 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 31, 2019 9:36 pm 2 people like this post

Yep, I can confirm devs are working on it as we speak! :D

collinp
Expert
Posts: 141
Liked: 10 times
Joined: Feb 14, 2012 8:56 pm
Full Name: Collin P
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by collinp » Feb 09, 2019 3:13 am

I'm having a hard time understanding this. We don't use Backup Copy jobs. I would like to back up directly to S3, so that if we lose our data center and our local scale-out repository, last night's backups are in S3. We do a monthly active full and daily incrementals. I don't mind also storing data locally for quick restores, but the most important feature is that backups go offsite as soon as possible, either while the backup is running or shortly after the backup finishes.

Is this possible with the current version, either natively or by scheduling the PowerShell sync command to run frequently?
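
For example, would scheduling something like this right after the backup window do the trick? (Just a rough sketch; the script path, SOBR name and start time are placeholders, and I'm assuming the Veeam snap-in has to be loaded inside the script.)

Code:

# C:\Scripts\Invoke-CapacityTierSync.ps1 -- run the capacity tier sync on demand
Add-PSSnapin VeeamPSSnapin                       # load the Veeam B&R PowerShell snap-in
Start-VBRCapacityTierSync -Repository "SOBR-01"  # "SOBR-01" is a placeholder for the SOBR name

# Register a daily task that runs the script once the backup window has closed
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -File C:\Scripts\Invoke-CapacityTierSync.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "06:00"
Register-ScheduledTask -TaskName "Veeam Capacity Tier Sync" -Action $action -Trigger $trigger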

anthonyspiteri79
Veeam Software
Posts: 528
Liked: 110 times
Joined: Jan 14, 2016 6:48 am
Full Name: Anthony Spiteri
Location: Perth, Australia
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by anthonyspiteri79 » Feb 10, 2019 9:08 am

Hey there Collin.

For your particular use case you should still be considering Cloud Connect Backup to a Veeam Cloud and Service Provider partner. That is the only solution that will suit your requirements.

There is nothing stopping you from using Backup Copy jobs with Cloud Tier, but you have to understand the correlation between when a backup chain is sealed and when it falls outside of the operational restore window. If you configure GFS backups, those are certain candidates for offloading to the Capacity Tier. If you lose your datacenter and just use Cloud Tier, anything that is still within the operational restore window (i.e. not yet offloaded) will be lost.

With monthly active fulls, it will take until that full is completed for the chain to be sealed, and only then can its data be offloaded to the Capacity Tier.

For data to go offsite ASAP, you need to consider Cloud Connect Backup.
Anthony Spiteri
Global Technologist, Product Strategy | VMware vExpert
Email: anthony.spiteri@veeam.com | Mobile: +61488335699
Twitter: @anthonyspiteri | Skype: anthony_spiteri
