Discussions specific to object storage support as backup target
galbitz
Novice
Posts: 5
Liked: never
Joined: May 17, 2018 2:22 pm
Full Name: grant albitz
Contact:

veeam 10 copy to cloud seems very limited

Post by galbitz » Feb 19, 2020 4:37 am

So I may just be missing something here, but from what I can tell this feature is still extremely limited in v10.

Here is what I want to accomplish, and I would think most MSP organizations would have similar goals.

1. Normal Veeam backup of machines; in this case it's a vSphere job. We keep around 30 restore points locally.
2. Copy of critical systems' backups to S3/Wasabi (Wasabi in my case), with only 2-3 restore points. We would only use the cloud copy for recovery if something happens locally, e.g. ransomware.

Here is what I have tried so far.

1. Created a Wasabi backup repository. No issues.
2. Attempted to create a backup copy job targeting the Wasabi backup repository. No luck: there is no option to select an S3 object repository, only my local ones (I created a second one since the initial list was blank).
3. Created a scale-out repository. I see the option to extend the scale-out backup repository's capacity with object storage, so I selected the repository in question. I also see the option to copy to object storage, but if I enable it, everything backed up to this repository will automatically go to S3. That's kind of the idea, but in truth there is only one repository for most of my existing customers; I feel the S3 copy should be a job setting, not a repository setting. Ignoring that, and pretending it's fine that all jobs on this repository go to the cloud, I hit my next problem.
4. There do not seem to be retention settings for how many restore points to keep in S3 storage specifically. I only see retention policies enforced at the job level, and those apply to both the cloud and on-prem copies.

Am I missing something here? Is there a reason a backup copy job still can't target an S3 repository in v10?

HannesK
Veeam Software
Posts: 5082
Liked: 676 times
Joined: Sep 01, 2014 11:46 am
Location: Austria
Contact:

Re: veeam 10 copy to cloud seems very limited

Post by HannesK » Feb 19, 2020 6:24 am

Hello,
and welcome to the forum.
galbitz wrote:
Copy of critical systems' backups to S3/Wasabi (Wasabi in my case), with only 2-3 restore points. We would only use the cloud copy for recovery if something happens locally, e.g. ransomware.
Our goal was to have a very simple "single checkbox copy policy". Your use case is far more complex, as you have to think about "which VM for how many restore points" and then create extra jobs for that... that was not our design goal.

But of course, your use case is valid. We also hear it from customers. You did not miss anything. Today there is no option to write directly to object storage (backup job / backup copy job).

Best regards,
Hannes

veremin
Product Manager
Posts: 17428
Liked: 1552 times
Joined: Oct 26, 2012 3:28 pm
Full Name: Vladimir Eremin
Contact:

Re: veeam 10 copy to cloud seems very limited

Post by veremin » Feb 19, 2020 12:16 pm

The best approach in your case would be to:

* create an object storage repository
* create an additional SOBR
* add the object storage repository to it as the Capacity Tier
* enable copy mode
* point the backup copy job containing your mission-critical VMs at the SOBR

This way, only the restore points of the mission-critical VMs will be copied to object storage.
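To make the resulting behaviour concrete, here is a toy model of what the steps above produce. This is not Veeam code; the class, names, and numbers are mine, and it only models what this thread states: with copy mode, every restore point landing on the performance tier is mirrored to the capacity tier, and the job's single retention setting prunes both together, which is why there is no separate cloud retention count.

```python
# Toy model of a SOBR with capacity-tier copy mode enabled.
# Illustration only; not Veeam's implementation.

class SobrCopyMode:
    def __init__(self, retention_points):
        self.retention = retention_points
        self.performance_tier = []   # local extents
        self.capacity_tier = []      # object storage (e.g. Wasabi)

    def write_restore_point(self, name):
        self.performance_tier.append(name)
        self.capacity_tier.append(name)   # copy mode mirrors immediately
        # Job-level retention prunes BOTH tiers in lockstep.
        while len(self.performance_tier) > self.retention:
            self.performance_tier.pop(0)
            self.capacity_tier.pop(0)

sobr = SobrCopyMode(retention_points=3)
for day in ["mon", "tue", "wed", "thu", "fri"]:
    sobr.write_restore_point(day)

print(sobr.performance_tier)  # ['wed', 'thu', 'fri']
print(sobr.capacity_tier)     # ['wed', 'thu', 'fri'] -- same set, no separate cloud retention
```

The upshot of the sketch: a backup copy job with, say, 3 restore points pointed at such a SOBR yields 3 points locally and 3 in object storage, which is how the suggestion above meets a "2-3 cloud restore points" requirement.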

Thanks!


Re: veeam 10 copy to cloud seems very limited

Post by galbitz » Feb 19, 2020 1:28 pm

Thanks for your quick reply. I am just following up to better understand the limitations and the approach taken.

I guess my biggest point of confusion is that the copy-to-cloud functionality lives in the repository, not the backup job. This creates a fairly significant issue.

Say I have a customer with 1 TB of used space in their production environment and a 3-4 TB repository. This has served them well and given them a fair number of restore points locally. Now we wish to back up some of their data to the cloud.

It seems the only way at the moment would be to create a separate backup job with a separate repository, configure that repository to do the copy-to-cloud function, and reserve it for only the items we want copied to the cloud. Without purchasing more local storage, we would need to shrink their primary repository by 1-1.5 TB so the new repository has enough space for the initial full backup plus, say, 3 restore points. This obviously leads to a weird and inefficient use of the local storage, but I believe it would work:

Backup job 1: local only, to the initial but resized ~2 TB repository, with however many restore points we can fit.
Backup job 2: local, to the new 1.5 TB scale-out repository with immediate copy-to-cloud enabled. 3 restore points.

Obviously this scenario is less than optimal: it requires two full backups of the same data and a badly, manually partitioned scheme for the repositories that is sure to result in one repository being full while the other has unused space. I obviously cannot speak to the intricacies of the technical challenges you face on the back end, but it would seem at least somewhat beneficial if the copy-to-cloud setting could be associated with the backup job, so that we could accomplish the two-job scenario while relying on a single repository for both jobs.

Maybe I don't need two different jobs, but I believe I do, since the goal is not to keep the cloud-bound data around for more than a few copies, whereas locally we want to keep as many as we can. I get that local storage is cheap, but the whole cloud-via-scale-out-repo approach seems really limited.

Why is the copy-to-cloud functionality so closely tied to the scale-out repository? If we had the option to run a backup copy job directly to S3 cloud storage with its own retention setting, it would seemingly solve these requirements.

dalbertson
Veeam Software
Posts: 169
Liked: 46 times
Joined: Jul 21, 2015 12:38 pm
Full Name: Dustin Albertson
Contact:

Re: veeam 10 copy to cloud seems very limited

Post by dalbertson » Feb 19, 2020 1:46 pm

@galbitz You can do a backup copy job to Cloud Tier. Create a new SOBR, point a backup copy job to it, and have it offload to S3.


Re: veeam 10 copy to cloud seems very limited

Post by galbitz » Feb 19, 2020 3:33 pm

Thanks. What are the sizing guidelines for a SOBR that is really only meant to serve as an S3 copy staging area? I assume it's not possible to undersize it, and that it must be large enough to hold the initial full backup locally plus any restore points.


Re: veeam 10 copy to cloud seems very limited

Post by dalbertson » Feb 19, 2020 3:39 pm

The sizing guidelines I would follow are the same as for any repository. The type of tiering you choose (copy, move, or both) will dictate how much space is needed. But I would size off the normal guidelines and not try to estimate the minimum usage, as that is harder to predict due to potential issues you may run into (bandwidth, network issues, time constraints, etc.), and an undersized repository would cause the job to error out. So if you have the space available, oversize it a bit; better safe than sorry.
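As a back-of-envelope illustration of the point above (my own figures and headroom factor, not official Veeam guidance): even a "staging only" local extent has to hold at least one full backup plus the incrementals of the active chain, with padding on top, because an offload stalled by bandwidth or network issues leaves data waiting locally.

```python
# Rough local-extent sizing for a SOBR used as an object-storage staging area.
# Hypothetical figures; real sizing should follow the normal repository guidelines.

def min_local_extent_gb(full_gb, inc_gb, points_per_chain, headroom=1.3):
    """Space for one full + its incrementals, padded for stalled offloads."""
    chain = full_gb + inc_gb * (points_per_chain - 1)
    return round(chain * headroom)

# e.g. a 1000 GB full, ~50 GB incrementals, chains of 3 restore points:
print(min_local_extent_gb(1000, 50, 3))  # 1430
```

The 30% headroom is an arbitrary safety margin; the takeaway is only that the floor is the whole active chain, not the size of a single restore point.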


Re: veeam 10 copy to cloud seems very limited

Post by galbitz » Feb 20, 2020 3:43 am

Just a follow-up after some testing.

I created a small scale-out repository that is set to copy data to object storage immediately.

I created a file share backup job and backed up about 3 GB of data. I set a secondary target in this job to the scale-out repository above, which in turn created the associated backup copy job. After running the backup, I can see that the backup copy job successfully copied the data to my local scale-out repository that has the object storage attached, but no files were uploaded to the object storage (Wasabi). I do see that it created a veeam/archive path, but nothing further.


Re: veeam 10 copy to cloud seems very limited

Post by HannesK » Feb 20, 2020 8:04 am

galbitz wrote:
There were no files uploaded to the object storage
That's correct: object storage with NAS backups is only supported for long-term retention of the primary repository.

The capacity tier of a SOBR is ignored for NAS backup (copy) jobs.

RossFawcett
Service Provider
Posts: 17
Liked: 2 times
Joined: Jul 14, 2014 8:49 am
Full Name: Ross Fawcett
Location: Perth, Western Australia
Contact:

Re: veeam 10 copy to cloud seems very limited

Post by RossFawcett » Feb 28, 2020 1:46 am

galbitz wrote:
Feb 19, 2020 4:37 am
Am I missing something here? Is there a reason a backup copy job still can't target an S3 repository in v10?
My reading between the lines has been that they want you to use Cloud Connect if you need that kind of simple functionality, and that adding such a simple feature to the product would have hurt many of the Cloud Connect providers (read: partner network) who provide simple offsite backup storage with it. It is frustrating that in this day and age we don't have functionality that has been requested for years.
HannesK wrote:
Feb 19, 2020 6:24 am
Our goal was to have a very simple "single checkbox copy policy". Your use case is far more complex, as you have to think about "which VM for how many restore points" and then create extra jobs for that... that was not our design goal.

But of course, your use case is valid. We also hear it from customers. You did not miss anything. Today there is no option to write directly to object storage (backup job / backup copy job).
Hardly any more complex than the standard backup policies. I'd argue this whole SOBR design with copied extents is far more complicated than a simple backup copy job to S3/Azure Blob, given you also say it works a lot like ReFS and block cloning. Not to mention you offer the simple backup copy job functionality to your Cloud Connect providers.
dalbertson wrote:
Feb 19, 2020 1:46 pm
@galbitz You can do a backup copy job to Cloud Tier. Create a new SOBR, point a backup copy job to it, and have it offload to S3.
That's a very roundabout way of achieving it, because your SOBR still needs space on-prem to stage it first; e.g. I now have to keep two sets of the same backups on-prem before I can replicate one to Azure Blob. Granted, you could use move mode to offload the blocks rather than copy mode, but it's still very much a workaround rather than a simple backup copy job directly to Azure/S3 blob storage. And move mode relies on sealed chains, which has its own set of issues.

Don't get me wrong, the Veeam backup product is amazing; it totally blew away the competition for virtual machine backup and has been my go-to product for years (I think we started using it around version 4-5). But the lack of a simple backup copy job to Azure/S3 blob storage is pretty disappointing. And it's clear from the confusion we constantly see here on the Veeam forums, across Reddit, Twitter, and anywhere people discuss the new blob storage support, that people were expecting it to be more like a simple copy job, i.e. the simple but powerful functionality we have come to rely on from Veeam.


Re: veeam 10 copy to cloud seems very limited

Post by dalbertson » Feb 28, 2020 2:14 am

A backup copy job can be sent to a SOBR and tiered off to object storage (tiering a closed chain).

In fairness, what people expect and want are usually unknown to them until a feature is announced or released. I can't count the times I have run into conversations where people want to do things without fully thinking the solution through.

Many want to move to object storage because it's "cheap", but just copying or moving a traditional backup file doesn't make sense in this situation, as the file format is not designed for that. When Veeam approaches a topic, it's thought through from many points of view and encompasses the entire picture. Getting data to object storage is easy, but what happens when you need to restore? What happens when we write data? How do we keep the writes down, avoid egress, make restores efficient, and keep it responsive when you need it? What about indexing the data and making it simple for the end user? I could keep going on and on.

What v10 is for me is a juncture between two worlds: one on-prem (legacy) and one in the cloud. It's a period of learning for everyone when these new features come out, but look at our history of innovation and think back to when those features and products were released. Change is constant, and new ways of doing things will need to be adopted. Think about how we used to back up to tape directly and then send that to a vault.

That's just my thoughts, and it doesn't mean they're right or wrong. I'm purely speaking from my own POV and not for anyone else.


Edit *** - If the point is to get a backup copy to object storage, then what's wrong with copy mode and doing it all in one job? It's a change in strategy, not a loss of features.

There always comes a point where new technology requires a change in the status quo.


Re: veeam 10 copy to cloud seems very limited

Post by RossFawcett » Feb 28, 2020 5:43 am

dalbertson wrote: A backup copy job can be sent to a SOBR and tiered off to object storage (tiering a closed chain)
I understand that; it was in the 9.5 U4 release. And even then, people complained that whilst it had value, they still wanted to effectively replicate jobs straight into object storage. The copy job allows this in a roundabout way.
dalbertson wrote: In fairness, what people expect and want are usually unknown to them until a feature is announced or released. I can't count the times I have run into conversations where people want to do things without fully thinking the solution through.
The fact that there are white papers demonstrating how to integrate object storage as a VTL implies it's something people have wanted to do. Granted, at that stage it was in some ways treated as a way to utilise things like Glacier as a tape equivalent, but the general idea of simply running a copy job to replicate a backup offsite to object storage has been around for some time.
dalbertson wrote: Many want to move to object storage because it's "cheap", but just copying or moving a traditional backup file doesn't make sense in this situation, as the file format is not designed for that. When Veeam approaches a topic, it's thought through from many points of view and encompasses the entire picture. Getting data to object storage is easy, but what happens when you need to restore? What happens when we write data? How do we keep the writes down, avoid egress, make restores efficient, and keep it responsive when you need it? What about indexing the data and making it simple for the end user? I could keep going on and on.
Absolutely there is a cost perspective, and yes, picking up the file as-is and dropping it into object storage would not make sense given the transaction and API costs. So I do appreciate the advantage in the way Veeam's object storage support works; in a way it's a lot like what can be done with ReFS, where blocks can be cloned (notwithstanding my other ticket about it not working as expected). However, would you not agree that having to use a SOBR to achieve a simple backup copy job into object storage over-complicates it? I don't disagree that SOBR has value, but there is something to be said for allowing the flexibility of a standard backup copy job to object storage, like what I can do today with a copy job to a ReFS-based repository.
dalbertson wrote: What v10 is for me is a juncture between two worlds: one on-prem (legacy) and one in the cloud. It's a period of learning for everyone when these new features come out, but look at our history of innovation and think back to when those features and products were released. Change is constant, and new ways of doing things will need to be adopted. Think about how we used to back up to tape directly and then send that to a vault.
I agree that we are between two worlds, though I wouldn't call on-prem legacy by any means. Software as a service definitely makes sense in the cloud, but platform or infrastructure in the cloud isn't always the right fit, from both a technology and a cost perspective. As much as we'd all love unlimited budgets, cloud is not necessarily cheaper depending on how it is approached. We spend a lot of time working with customers to ensure that, whether they are cloud or on-premise, the solution ultimately meets their business requirements. This has many facets: performance (think network latency and access), features (many SaaS versions of on-prem products still lack feature parity), security (though this falls more under compliance, e.g. you may not be able to show exactly where all your data is to meet customer or legislative requirements), and of course, at the end of the day, cost.

I guess the issue for us is that most of our customers were using Cloud Connect simply to have a copy of their backups offsite. We can do simple copy jobs to meet customer requirements quite easily through that solution, but the fact that we can't do it with object storage without going through a convoluted SOBR process makes it feel very much like protecting the Cloud Connect partners rather than providing us with a simple mechanism to push a backup offsite. And whilst I acknowledge there are benefits to Cloud Connect for DRaaS, for many customers DR doesn't actually mean a full replica VM somewhere. For those that do want that level of replication, they often want replication intervals that Veeam cannot meet with its snapshot-based replication design. Plus, to be honest, there is a very big cost factor in maintaining replicas this way; I'm not saying cloud doesn't have advantages in not having to maintain hardware, but the density of compute we have today has made it fairly cost-effective to simply run your own replica, which you know will have everything you need to work in an emergency.

So we can make this SOBR copy job work (assuming my other ticket gets resolved), and we will likely end up with multiple repositories to do it, as we have done in the past to get GFS (combined with ReFS for block-cloned monthlies, for example). But it's frustrating, because this feels like a missed opportunity to give us a simple but very powerful way to do backup copies into object storage. That could then have been extended to do things with storage policies, such as moving blocks to archive tiers cloud-side, or even simple things like one repository using LRS and another using ZRS for VM backups with different requirements. But anyway, we are probably vastly off topic at this point.

TL;DR: v10 can't do a straight backup copy to object storage; you have to use SOBR, which IMO over-complicates things.

