Discussions specific to object storage
victor.bylin@atea.se
Service Provider
Posts: 23
Liked: 1 time
Joined: Oct 26, 2017 11:22 am
Full Name: Victor
Contact:

Been testing out Update 4 and S3, some questions

Post by victor.bylin@atea.se » Jan 07, 2019 11:56 am

Hi,

I have been given early access to Update 4 and have been testing out S3 from an on-prem Scality.
First of all, I don't like the restriction that you must have a Scale-Out Backup Repository and cannot copy to it directly.
The design in this case is GFS and spreads the first selection between two sites.
There are two big backup proxies that act as repositories as well.
They handle all backups in the initial step, but archiving isn't part of their design.
The archive repository is the Scality.
This is where the scale-out requirement is a big problem: I now have to use the backup proxies as staging for all archive backups, which is a lot of data, instead of sending it directly.
With that design, having a performance tier as staging for archiving results in duplicate backups on the sites, because the archive job is a different selection than the first job.
To clarify: the first selection is based on storage/LUNs, so one job per LUN.
The second selection, for archiving, is based on the customer's choice of longer retention, which is at the VM level.
Are there any plans to include an S3 repository as a regular repository that you can backup copy to?
If not, can you create a feature request for that?

Secondly, is there a registry key controlling how often the offloading process runs?
From what I can see in my testing, it runs every four hours.
I want it to check more often, so that only a small portion of the data sits on the performance tier.

One more question regarding the override option on the scale-out repository.
How often does it check disk space and the number of days a backup has been on disk?

Best regards!
Victor

anthonyspiteri79
Veeam Software
Posts: 438
Liked: 80 times
Joined: Jan 14, 2016 6:48 am
Full Name: Anthony Spiteri
Location: Perth, Australia
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by anthonyspiteri79 » Jan 07, 2019 1:28 pm 1 person likes this post

Hey there Victor.

The Cloud Tier as it appears in Update 4 is a way to move data from more expensive storage to relatively cheaper storage, based on policies set within the properties of a Scale-Out Backup Repository. As the name suggests, it is a tiering feature and not an archival one, which is where your questions are coming from.
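
For context, the policy here is essentially the operational restore window set on the SOBR. In PowerShell that looks something like the sketch below - the cmdlet and parameter names are from memory against the Update 4 snap-in and the repository names are placeholders, so verify with Get-Help in your install:

Code: Select all

# Sketch: attach an object storage repository as the Capacity Tier
# with a 14-day operational restore window. Names are placeholders.
$sobr    = Get-VBRBackupRepository -ScaleOut -Name "SOBRNAME"
$objrepo = Get-VBRObjectStorageRepository -Name "S3 Object Repo"
Set-VBRScaleOutBackupRepository -Repository $sobr -EnableCapacityTier `
    -ObjectStorageRepository $objrepo -OperationalRestorePeriod 14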

That said, I'll address the easier part of your post first: the offloading process.

There is a manual way to do this from the UI to force the job. If you control-click on the SOBR name as shown below, there is an option to "Run Tiering Job Now", which will run the job on demand.

[Image: SOBR context menu showing the "Run Tiering Job Now" option]

There is also a PowerShell command that you can run to achieve the same result:

Code: Select all

Start-VBRCapacityTierSync -Repository SOBRNAME

You could obviously run this as a scheduled task if desired.
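
For example, something along these lines should do it (an untested sketch - the task name, the hourly interval and "SOBRNAME" are placeholders, and it assumes the 9.5 U4 VeeamPSSnapin):

Code: Select all

# Sketch: run the Capacity Tier sync every hour via a Windows scheduled task.
# Task name, interval and "SOBRNAME" are placeholders for your environment.
$cmd     = 'Add-PSSnapin VeeamPSSnapin; Start-VBRCapacityTierSync -Repository ''SOBRNAME'''
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument "-NoProfile -Command `"$cmd`""
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName 'Veeam Capacity Tier Sync' -Action $action -Trigger $trigger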

In terms of the question around how often we check for the override option: we are tracking that in the database, and it's worked out based on the known percentage of remaining space on the extents, but I am seeking further clarification on the mechanisms used. Stand by for that.
Anthony Spiteri
Global Technologist, Product Strategy | VMware vExpert
Email: anthony.spiteri@veeam.com | Mobile: +61488335699
Twitter: @anthonyspiteri | Skype: anthony_spiteri

victor.bylin@atea.se
Service Provider
Posts: 23
Liked: 1 time
Joined: Oct 26, 2017 11:22 am
Full Name: Victor
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by victor.bylin@atea.se » Jan 08, 2019 12:57 pm

Hi Anthony,

Okay, so regarding using S3 directly for Backup Copy jobs, there are no plans for that?
Because, as you probably understand, S3 storage in our case is the archival storage that we want to backup copy directly to.

Thanks for the PowerShell command.

Looking forward to hearing from you regarding the options.

Best Regards!
Victor

Gostev
SVP, Product Management
Posts: 23624
Liked: 3119 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 08, 2019 3:45 pm 4 people like this post

Hi Victor - something very similar to this is in the plans, just much simplified (no dealing with Backup Copy jobs).

Basically, in Update 4 there's only a "move" mode available for the Capacity Tier, where the oldest backups are moved from the Performance Tier to the Capacity Tier. However, in the next update we're also planning to add a "copy" mode, where ALL backup files created in the Performance Tier will be copied to the Capacity Tier as soon as they appear.

This is actually the reason behind the current UI design, where there's a "move" check box that you cannot uncheck :) This does not make much sense today because it's done in preparation for the final look of the corresponding wizard step, when both move and copy options will be available, and any combination supported.

Thanks!

victor.bylin@atea.se
Service Provider
Posts: 23
Liked: 1 time
Joined: Oct 26, 2017 11:22 am
Full Name: Victor
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by victor.bylin@atea.se » Jan 09, 2019 3:54 pm

Hi Gostev,

Very nice to hear.
Do you have any ETA for that update?

Thanks a lot for the quick responses!

Best regards!
Victor

Gostev
SVP, Product Management
Posts: 23624
Liked: 3119 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 09, 2019 4:27 pm

Sometime in H2 2019 perhaps... we're just too early in the release cycle to estimate.

AcrisureRW
Novice
Posts: 8
Liked: 2 times
Joined: Mar 20, 2018 7:51 pm
Full Name: Ryan Walker
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by AcrisureRW » Jan 09, 2019 4:37 pm

Gostev wrote:
Jan 08, 2019 3:45 pm
Hi Victor - something very similar to this is in the plans, just much simplified (no dealing with Backup Copy jobs).

Basically, in Update 4 there's only a "move" mode available for the Capacity Tier, where the oldest backups are moved from the Performance Tier to the Capacity Tier. However, in the next update we're also planning to add a "copy" mode, where ALL backup files created in the Performance Tier will be copied to the Capacity Tier as soon as they appear.

This is actually the reason behind the current UI design, where there's a "move" check box that you cannot uncheck :) This does not make much sense today because it's done in preparation for the final look of the corresponding wizard step, when both move and copy options will be available, and any combination supported.
Good to know! This is actually exactly what we'd be looking for ourselves (the way it is), as we will have two off-site copies: one in a private data center/cloud repository, and then once it ages to X it should be moved fully up to a public cloud (S3 is our current choice) for long-term retention.

Gostev
SVP, Product Management
Posts: 23624
Liked: 3119 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 09, 2019 5:13 pm 1 person likes this post

Yes, that's the plan - and is exactly how it will work if you have both "copy" and "move" check boxes selected at the same time.

DE&C
Service Provider
Posts: 2
Liked: never
Joined: Aug 07, 2017 11:51 am
Full Name: Doron William
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by DE&C » Jan 14, 2019 7:10 am

Gostev wrote:
Jan 08, 2019 3:45 pm
Hi Victor - something very similar to this is in the plans, just much simplified (no dealing with Backup Copy jobs).

Basically, in Update 4 there's only a "move" mode available for the Capacity Tier, where the oldest backups are moved from the Performance Tier to the Capacity Tier. However, in the next update we're also planning to add a "copy" mode, where ALL backup files created in the Performance Tier will be copied to the Capacity Tier as soon as they appear.

This is actually the reason behind the current UI design, where there's a "move" check box that you cannot uncheck :) This does not make much sense today because it's done in preparation for the final look of the corresponding wizard step, when both move and copy options will be available, and any combination supported.

Thanks!
Hi Anton,

Good to hear that this is on the roadmap. I have a lot of customers that are looking for a way to have a copy on an object store (Ceph, Cloudian). Their issue is not addressed with U4.

It would make customers' lives much simpler if they could use S3 as they use tape now, to have an offsite copy (and of course in an automated way).

Looking forward to the feature.
Doron

joebranca
Influencer
Posts: 14
Liked: never
Joined: Oct 28, 2015 9:36 pm
Full Name: Joe Brancaleone
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by joebranca » Jan 15, 2019 7:10 pm

Question related to this: I installed Update 4 and set up an S3 bucket to test/implement cloud tiering. When configuring the cloud extent to point to the bucket (using keys from a new IAM user set up just for this), it looks like it requires a specific folder in the bucket. However, the bucket folder we created is not browsable from the extent setup. This is peculiar, because an ls command from the AWS CLI shows the folder. Is there some additional IAM action needed for the user to make the folder read-writable?

anthonyspiteri79
Veeam Software
Posts: 438
Liked: 80 times
Joined: Jan 14, 2016 6:48 am
Full Name: Anthony Spiteri
Location: Perth, Australia
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by anthonyspiteri79 » Jan 16, 2019 12:41 am

Hey there Joe.

You should create the folder as part of the Object Storage Repository setup through the wizard. There are also PowerShell commands to do the same. In terms of what you have seen, that might be related to the way we create the folder from the Veeam Backup & Replication console.
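
For reference, a rough sketch of that PowerShell flow is below. The cmdlet names are from the Update 4 snap-in as I remember them, and the keys, bucket and folder names are placeholders, so double-check with Get-Help before relying on it:

Code: Select all

# Sketch: create the bucket folder and the Object Storage Repository via PowerShell.
# Keys, bucket and repository names below are placeholders for your environment.
$account    = Add-VBRAmazonAccount -AccessKey "YOURACCESSKEY" -SecretKey "YOURSECRETKEY"
$connection = Connect-VBRAmazonS3Service -Account $account -RegionType Global -ServiceType CapacityTier
$bucket     = Get-VBRAmazonS3Bucket -Connection $connection -Name "your-bucket"
$folder     = New-VBRAmazonS3Folder -Connection $connection -Bucket $bucket -Name "Veeam"
Add-VBRAmazonS3Repository -Connection $connection -AmazonS3Folder $folder -Name "S3 Object Repo"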
Anthony Spiteri
Global Technologist, Product Strategy | VMware vExpert
Email: anthony.spiteri@veeam.com | Mobile: +61488335699
Twitter: @anthonyspiteri | Skype: anthony_spiteri

joebranca
Influencer
Posts: 14
Liked: never
Joined: Oct 28, 2015 9:36 pm
Full Name: Joe Brancaleone
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by joebranca » Jan 16, 2019 5:36 pm

Ah, OK, got it. I was going through the wrong wizard. What is the intended difference between setting up an External Repository for S3 vs. creating a new repository and selecting Object Storage -> S3 repository?

Also, does it make sense for the IAM user to have a Delete Object permission for the data in the bucket? Does Veeam have the functionality to go in and delete backup data?

v.Eremin
Product Manager
Posts: 15781
Liked: 1241 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by v.Eremin » Jan 16, 2019 6:31 pm

What is the intended difference between setting up an External Repository for S3 vs. creating a new repository and selecting Object Storage -> S3 repository?
An External Repository is an S3 repository created by Cloud Protection Manager to store long-term backups. Once created and filled with CPM backup data, it can then be added to a backup server for further backup discovery, data recovery, or data offload (to an on-prem repository via a backup copy job).

An S3 Object Storage Repository is a capacity extent of a Scale-Out Backup Repository to which backup files get offloaded once they age out of the operational restore window.
Does Veeam have the functionality to go in and delete backup data?
Correct; those files will be deleted once they fall out of the backup job's retention period.
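
So yes, the IAM user does need delete permissions on its objects. As a rough illustration (not official guidance - the action list is my assumption, so verify it against the user guide), a minimal policy could be attached with the AWS Tools for PowerShell like this:

Code: Select all

# Sketch: attach a minimal S3 policy to the IAM user used for offloading.
# Requires the AWS Tools for PowerShell (AWSPowerShell module).
# The action list and all names are assumptions - verify against the user guide.
Import-Module AWSPowerShell
$policy = @'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "s3:ListBucket",
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:PutObject",
      "s3:DeleteObject"
    ],
    "Resource": [
      "arn:aws:s3:::your-bucket",
      "arn:aws:s3:::your-bucket/*"
    ]
  }]
}
'@
Write-IAMUserPolicy -UserName "veeam-offload" -PolicyName "veeam-s3-offload" -PolicyDocument $policy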

Thanks!

sandsturm
Enthusiast
Posts: 93
Liked: 9 times
Joined: Mar 23, 2015 8:30 am
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by sandsturm » Jan 16, 2019 8:03 pm

If I have configured a SOBR with S3 for older files and I lose my complete on-premises SOBR due to a disaster, am I able to restore any data from the S3 bucket without having a functional SOBR, or are those files useless in that case?

Thx,
Sandsturm

Gostev
SVP, Product Management
Posts: 23624
Liked: 3119 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Been testing out Update 4 and S3, some questions

Post by Gostev » Jan 16, 2019 9:00 pm 1 person likes this post

Yes, you will be able to restore all data from the S3 bucket. You still need a functional SOBR to have the backup file shells automatically re-created there before performing the restore, but it can be just a single extent, and you don't need much space, since those VBK shells contain metadata only. Basically, in case of a complete disaster, all your needs are covered by installing B&R on your laptop, and you can start performing restores from there.
