-
- Enthusiast
- Posts: 42
- Liked: 5 times
- Joined: May 17, 2018 2:22 pm
- Full Name: grant albitz
- Contact:
veeam 10 copy to cloud seems very limited
So I may just be missing something here, but from what I can tell this feature is still extremely limited in v10.
Here is what I want to accomplish, and I would think most MSP organizations would have similar goals.
1. Normal Veeam backup of machines; in this case it's a vSphere job. We keep around 30 retention points locally.
2. Copy of critical systems' backups to S3/Wasabi (Wasabi in my case), with only 2-3 retention points. We are only going to use the cloud recovery if something happens locally, i.e. ransomware etc.
Here is what I tried so far.
1. Created a Wasabi backup repository. No issues.
2. Attempted to create a backup copy job targeting the Wasabi backup repository. No luck: there is no option to select an S3 object repo, I only see my local ones (I created a second one since the initial list was blank).
3. Created a scale-out repository. I see the option to extend the scale-out backup repository capacity with object storage, and I select the repository in question. I see the option to copy to object storage, but if I do this, everything I back up to this repo will automatically go to S3. That's kind of the idea, but truthfully there is only one repo for most of my existing customers. I feel the S3 object copy should be a job setting, not a repo setting. Ignoring this and pretending it's OK that all jobs on this repo go to the cloud, I have my next problem.
4. There do not seem to be retention settings for how many restore points you want to keep on the S3 storage specifically. I only see retention policies enforced at the job level, and those apply to both cloud and on-prem.
Am I missing something here? Is there a reason a backup copy job still can't target an S3 repo in v10?
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: veeam 10 copy to cloud seems very limited
Hello,
and welcome to the forum.
galbitz wrote: ↑ Copy of critical systems' backups to S3/Wasabi (Wasabi in my case), with only 2-3 retention points. We are only going to use the cloud recovery if something happens locally, i.e. ransomware etc.
Our goal was to have a very simple "single checkbox copy policy". Your use case is by far more complex, as you have to think about "which VM for how many restore points" and then create extra jobs for that; that was not our design goal.
But of course, your use case is valid. We also hear it from customers. You did not miss anything. Today there is no option to write directly to object storage (backup job / backup copy job).
Best regards,
Hannes
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: veeam 10 copy to cloud seems very limited
The best approach in your case would be to:
* create object storage repository
* create additional SOBR
* add the object storage repository as Capacity Tier to it
* enable copy mode
* point backup copy job containing mission critical VMs to the SOBR
This way, you will have only restore points of mission critical VMs copied to object storage.
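For reference, the same steps in rough PowerShell (an untested sketch: the repository, extent, and SOBR names below are placeholders, and the cmdlet/parameter names should be verified with Get-Help against your VBR version):
Code:
# Object storage repository (e.g. Wasabi), assumed already registered
$wasabi = Get-VBRObjectStorageRepository -Name "Wasabi"
# Local repository to serve as the Performance Tier extent
$extent = Get-VBRBackupRepository -Name "Local Repo"
# Create the SOBR and attach the object storage as Capacity Tier
# with copy mode enabled, so new restore points are copied to S3
# as soon as they land on the Performance Tier
Add-VBRScaleOutBackupRepository -Name "SOBR-Critical" `
    -PolicyType DataLocality `
    -Extent $extent `
    -EnableCapacityTier `
    -ObjectStorageRepository $wasabi `
    -EnableCopyMode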
Thanks!
-
- Enthusiast
- Posts: 42
- Liked: 5 times
- Joined: May 17, 2018 2:22 pm
- Full Name: grant albitz
- Contact:
Re: veeam 10 copy to cloud seems very limited
Thanks for your quick reply. I am just following up to better understand the limitations and the approach taken.
I guess my biggest point of confusion is that the copy-to-cloud functionality lives in the repo, not the backup job. This creates a fairly significant issue.
Right now, say I have a customer with 1 TB of used space in their production environment and a 3-4 TB repo. This has served them well and given them a fair number of retention points locally. Now we wish to back up some of their data to the cloud.
It seems the only way at this moment would be to create a separate backup job with a separate repo, configure that repo to do the copy-to-cloud function, and reserve it for only the items we want to copy to the cloud. Without purchasing more local storage, we are going to need to shrink their primary repo by 1-1.5 TB so that the new repo has enough space for the initial full backup plus, say, 3 retention points. This obviously leads to a weird and inefficient use of the local storage, but I believe it would work:
Backup job 1: local only, to the initial but resized ~2 TB repo, set with however many retention points we can get.
Backup job 2: local, to the new 1.5 TB scale-out repo that has the immediate copy-to-cloud functionality enabled. 3 retention points.
Obviously this scenario is less than optimal: it requires two full backups of the same data and a really bad manually partitioned scheme for the repos that is sure to result in one repo being full while the other has unused space. I am obviously not able to speak to the intricacies of the technical challenges you face on the back end, but it would seem at least somewhat beneficial if the copy-to-cloud setting could be associated with the backup job, so that we could accomplish the two-job scenario while relying on a single repo for both jobs.
Maybe I don't need two different jobs, but I believe I do, since the goal here is not to keep the cloud-bound data around for more than a few copies, whereas locally we want to keep as many as we can. I get that local storage is cheap, but the whole cloud-via-scale-out-repo approach seems really broken and limited.
Why is the copy-to-cloud functionality so closely tied to the scale-out repo? If we had the option to run a backup copy job directly to the S3 cloud storage with its own retention setting, it would seemingly solve these requirements.
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: veeam 10 copy to cloud seems very limited
@galbitz You can do a backup copy job to cloud tier. Create a new SOBR and point a backup copy job to it and have it offload to S3.
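In PowerShell, that would look roughly like this (a sketch only; the job and SOBR names are placeholders, and the parameter set should be checked with Get-Help Add-VBRViBackupCopyJob for your build):
Code:
# Placeholder names; untested sketch
$sobr   = Get-VBRScaleOutBackupRepository -Name "SOBR-Critical"
$source = Get-VBRJob -Name "Critical VMs"
# Backup copy job targeting the SOBR; the SOBR's copy/move policy
# then offloads the copied restore points to the S3 Capacity Tier
Add-VBRViBackupCopyJob -Name "Critical VMs - Copy" `
    -BackupJob $source `
    -Repository $sobr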
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Enthusiast
- Posts: 42
- Liked: 5 times
- Joined: May 17, 2018 2:22 pm
- Full Name: grant albitz
- Contact:
Re: veeam 10 copy to cloud seems very limited
Thanks. What are the sizing guidelines for a SOBR that is really only meant to be used as an S3 copy? I assume it's not possible to undersize it, and it must be large enough to hold the initial full locally plus any restore points.
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: veeam 10 copy to cloud seems very limited
The sizing guidelines I would follow are the same as for any repo. The type of tiering you choose (copy, move, or both) will dictate how much space is needed. But I would size off the normal sizing guidelines and not try to estimate the minimum usage, as that is harder to predict due to potential issues you may run into (bandwidth, network issues, time constraints, etc.), and an undersized repo would cause the job to error out. So if you have the space available, oversize it a bit; better safe than sorry.
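To put rough numbers on it (made-up figures, purely for illustration): with a 1 TB full backup, roughly 5% daily change, and 3 restore points, the landing repository needs about 1 TB + 2 x 50 GB, i.e. around 1.1 TB, and I would add 20-30% headroom on top for merges and for restore points that have not been offloaded yet.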
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Enthusiast
- Posts: 42
- Liked: 5 times
- Joined: May 17, 2018 2:22 pm
- Full Name: grant albitz
- Contact:
Re: veeam 10 copy to cloud seems very limited
Just a follow-up after some testing.
I created a small scale-out repo that is set to immediately copy data to object storage.
I created a file share backup job and backed up about 3 GB of data. I created a secondary target in this job pointing to the scale-out repo indicated above, which in turn created the associated backup copy job. After running the backup, I see that the backup copy job successfully copied the data to my local scale-out repo that has the object storage associated. However, no files were uploaded to the object storage (Wasabi). I do see that it created a veeam/archive path, but nothing further.
-
- Product Manager
- Posts: 14837
- Liked: 3083 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: veeam 10 copy to cloud seems very limited
galbitz wrote: ↑ No files were uploaded to the object storage (Wasabi)
That's correct. Object storage with NAS backups is only supported for long-term retention of the primary repository.
The Capacity Tier of a SOBR is ignored for NAS backup (copy) jobs.
-
- Service Provider
- Posts: 18
- Liked: 4 times
- Joined: Jul 14, 2014 8:49 am
- Full Name: Ross Fawcett
- Location: Perth, Western Australia
- Contact:
Re: veeam 10 copy to cloud seems very limited
My read-between-the-lines take on this has been: they want you to use Cloud Connect if you need that kind of simple functionality, and adding such a simple feature to the product would have hurt many of the Cloud Connect providers (read: partner network) who provide simple offsite backup storage with it. It is frustrating that in this day and age we don't have functionality that has been requested for years.
HannesK wrote: ↑Feb 19, 2020 6:24 am Our goal was to have a very simple "single checkbox copy policy". Your use case is by far more complex, as you have to think about "which VM for how many restore points" and then create extra jobs for that; that was not our design goal. But of course, your use case is valid. We also hear it from customers. You did not miss anything. Today there is no option to write directly to object storage (backup job / backup copy job).
Hardly any more complex than the standard backup policies. I'd argue this whole SOBR design with copied extents is far more complicated than a simple backup copy job to S3/Azure blob, given you also say it works a lot like ReFS and block clone. Not to mention you offer the simple backup copy job functionality to your Cloud Connect providers.
dalbertson wrote: ↑Feb 19, 2020 1:46 pm @galbitz You can do a backup copy job to cloud tier. Create a new SOBR and point a backup copy job to it and have it offload to S3.
That's a very roundabout way of achieving it, because your SOBR will still need space on prem first to stage it; e.g. I now have to keep two sets of the same backups on prem before I can replicate one to Azure blob. Granted, you could use move mode to offload the blocks rather than copy mode, but it's still very much a workaround rather than a simple backup copy job directly to Azure/S3 blob. And move relies on sealed chains, which has its own set of issues.
Don't get me wrong, the Veeam backup product is amazing; it totally blew away the competition for virtual machine backup and has been my go-to product for backup for years (I think around version 4-5 was when we started using it). But the lack of a simple backup copy job to Azure/S3 blob is pretty disappointing. And it's clear from the confusion we constantly see here on the Veeam forums, across Reddit, Twitter, and anywhere people talk about the new blob storage support, that people were expecting it to be more like a simple copy job, i.e. the simple but powerful functionality we have come to rely on from Veeam.
-
- Veeam Software
- Posts: 492
- Liked: 175 times
- Joined: Jul 21, 2015 12:38 pm
- Full Name: Dustin Albertson
- Contact:
Re: veeam 10 copy to cloud seems very limited
A backup copy job can be sent to a SOBR and tiered off to object. (Tiering a closed chain.)
In fairness, what people expect and want are usually unknown to them until a feature is announced or released. I can't count the times I have run into conversations where people want to do things without fully thinking the solution out.
Many want to move to object because it's "cheap", but just copying or moving a traditional backup file doesn't make sense in this situation, as the file is not designed for that. Plus, when Veeam approaches a topic, it's thought out from many different points of view and encompasses the entire picture. Meaning: getting data to object is easy, but what happens when you need to restore? What happens when we write data? How do we keep the writes down, how do we avoid egress, how do we make restores efficient, how can we make it responsive when you need it? What about how we index data, and how do we make it simple for the end user? I could keep going on and on.
What v10 is for me is a juncture between two worlds: one on prem (legacy) and one in the cloud. It's a period of learning for everyone when these new features come out, but look at our history of innovation and think back to when those features and products were released. Change is constant, and new ways of doing things will need to be adopted. Think about how we used to back up to tape directly and then send that to a vault.
Those are just my thoughts, and it doesn't mean they're right or wrong. I'm purely speaking from my POV and not anyone else's.
Edit *** - if the point is to get a backup copy to object, then what's wrong with copy mode and doing it all in one job? It's a change in strategy, not a loss of features.
There always comes a point where new technology requires a change in the status quo.
Dustin Albertson | Director of Product Management - Cloud & Applications | Veeam Product Management, Alliances
-
- Service Provider
- Posts: 18
- Liked: 4 times
- Joined: Jul 14, 2014 8:49 am
- Full Name: Ross Fawcett
- Location: Perth, Western Australia
- Contact:
Re: veeam 10 copy to cloud seems very limited
dalbertson wrote: ↑ A backup copy job can be sent to a SOBR and tiered off to object. (Tiering a closed chain.)
I understand that; that was in the 9.5u4 release. And even then people complained that, whilst it had value, they still wanted to effectively replicate jobs straight into object-based storage. The copy job allows this in a roundabout way.
dalbertson wrote: ↑ In fairness, what people expect and want are usually unknown to them until a feature is announced or released. I can't count the times I have run into conversations where people want to do things without fully thinking the solution out.
The fact that there are white papers demonstrating how to integrate object storage as a VTL implies it's something people have wanted to do. Granted, at that stage it was in some ways treated as a way to utilise things like Glacier as a tape equivalent, but the general idea of being able to simply run a copy job to replicate a backup offsite to object storage has been around for some time.
dalbertson wrote: ↑ Many want to move to object because it's "cheap", but just copying or moving a traditional backup file doesn't make sense in this situation, as the file is not designed for that. Plus, when Veeam approaches a topic, it's thought out from many different points of view and encompasses the entire picture. Meaning: getting data to object is easy, but what happens when you need to restore? What happens when we write data? How do we keep the writes down, how do we avoid egress, how do we make restores efficient, how can we make it responsive when you need it? What about how we index data, and how do we make it simple for the end user? I could keep going on and on.
Absolutely there is a cost perspective to it, and yes, picking up the file as-is and dropping it into object storage would not make sense given the transaction and API costs. So I do appreciate the advantage Veeam has in the way its object storage works; in a way it's a lot like what can be done with ReFS, where blocks can be cloned etc. (notwithstanding my other ticket about it not working as expected). However, would you not agree that needing a SOBR to achieve a simple backup copy job into object storage over-complicates it? I don't disagree that SOBR has value, but there is something to be said for the flexibility of a standard backup copy job to object storage, like what I could do today with a copy job to a ReFS-based repository.
dalbertson wrote: ↑ What v10 is for me is a juncture between two worlds: one on prem (legacy) and one in the cloud. It's a period of learning for everyone when these new features come out, but look at our history of innovation and think back to when those features and products were released. Change is constant, and new ways of doing things will need to be adopted. Think about how we used to back up to tape directly and then send that to a vault.
I agree that we are between two worlds, though I wouldn't call on prem legacy by any means. Software as a service definitely makes sense in the cloud, but platform or infrastructure in the cloud isn't always the right fit, from both a technology perspective and a cost perspective. As much as we'd all love unlimited budgets, cloud is not necessarily cheaper depending on how it is approached. We spend a lot of time working with customers to ensure that, whether they are in the cloud or on premises, the solution ultimately meets their business requirements. This has many facets: performance (think network latency/access), features (many SaaS versions of on-prem products still do not have feature parity), security (though this falls more into compliance, like the fact you may not be able to show exactly where all your data is to meet some customer/legislative requirements), and of course, at the end of the day, cost.
I guess the issue for us is that most of our customers were using Cloud Connect simply to have a copy of their backup offsite, and we can do simple copy jobs to meet customer requirements quite easily through that solution. That we can't do the same with object storage without going through a convoluted process with SOBR makes it feel very much like protecting the Cloud Connect partners rather than providing us with a simple mechanism to push a backup offsite. And whilst I acknowledge there are benefits to Cloud Connect for DRaaS, for many customers DR doesn't actually mean a full replica VM somewhere. And for those who do want that level of replication, often they want replication levels that Veeam cannot meet with its snapshot-based replication design. Plus, to be honest, there is also a very big cost factor in maintaining replicas this way; not saying cloud doesn't have advantages in not having to maintain hardware, but the density of compute we have today has made it fairly cost-effective to simply have your own replica, which you know is going to have everything you need to work in that emergency.
So we can make this SOBR copy job work (assuming my other ticket can get resolved), and likely end up having multiple repositories to do it, like we have done in the past to access GFS (combined with ReFS for block-cloned monthlies, as an example). But it's frustrating, because this feels like a missed opportunity to give us a simple but very powerful way to do backup copies into object storage, which could then have been extended to do things with storage policy and moving blocks to archive cloud-side, or even simple things like one repo using LRS vs. another using ZRS for virtual machine backups with different requirements. But anyway, we are probably vastly off topic at this point.
TLDR:
V10 can't do a straight backup copy to object storage; you have to use a SOBR, which IMO over-complicates things.
-
- Enthusiast
- Posts: 44
- Liked: 5 times
- Joined: Apr 09, 2015 8:33 pm
- Full Name: Simon Chan
- Contact:
Re: veeam 10 copy to cloud seems very limited
Just wanted to voice my support here as well, hoping future versions of Veeam will allow immediate copy of restore points to object storage at the per-job level, not per SOBR.
As a cloud provider, we have many client backups stored in one giant SOBR. From my understanding, that is the whole point of SOBRs in the first place: a logical entity built on top of backend extents that we as admins can extend very easily. I'm extremely happy with Veeam's object storage integration, but not so much with how it is executed, or forced, at the per-SOBR level. Some of our clients may not necessarily want their backup copies to be stored in the cloud, and creating extra SOBRs just for these clients adds management overhead and can complicate things.
By allowing this at the per-job level instead, we could get much more granular. Obviously the options as they are now can still remain for those who want them, but I'm really, really hoping this will be possible soon. As it stands, I really want certain backup jobs to be copied automatically to Wasabi storage. This would eliminate the need to create a separate backup copy job, SOBR, etc.
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: veeam 10 copy to cloud seems very limited
Hey Simon,
I'm not running any Veeam provider services (our shop is more of an MSP with hands-on work than a cloud provider), but we have just been enabling the Capacity Tier without enabling Move/Copy, and then scripting the moves to the Capacity Tier for clients who want such a setup.
I'll need to see if I can get a demo license for Cloud Connect to test this, but is there any reason this wouldn't work for your setup? Granted, it's not automated or granular (as it should be, imo), but doesn't this meet your need exactly? This is the cmdlet I have in mind, and while I'm not sure if it does the same for service providers, for me it's been invaluable. I just feed a simple CSV of client-essential backups to a script we wrote internally, and it kicks the backups up to whatever S3 provider they choose.
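The skeleton of our script looks roughly like this (simplified and untested here; the CSV path and column name are made up, and Invoke-Offload is only a stand-in for the actual cmdlet linked above, not a real cmdlet name):
Code:
# clients.csv has a "BackupName" column listing the client-essential
# backups to push up to the Capacity Tier
$rows = Import-Csv -Path "C:\scripts\clients.csv"
foreach ($row in $rows) {
    $backup = Get-VBRBackup -Name $row.BackupName
    if (-not $backup) {
        Write-Warning "Backup '$($row.BackupName)' not found"
        continue
    }
    # Invoke-Offload $backup   # stand-in: substitute the cmdlet linked above
}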
Does this work in your environment?
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: veeam 10 copy to cloud seems very limited
This cmdlet won't work out of the box, because a Service Provider cannot see tenants' backups via the UI or via PowerShell (at least not via the regular set of VBR PS cmdlets).
That said, there may still be creative workarounds for getting tenants' backups via the SP's PowerShell. Let's discuss those separately if that's what you end up with.
Thanks!
-
- Enthusiast
- Posts: 44
- Liked: 5 times
- Joined: Apr 09, 2015 8:33 pm
- Full Name: Simon Chan
- Contact:
Re: veeam 10 copy to cloud seems very limited
soncscy wrote: ↑Apr 26, 2020 7:57 pm Hey Simon, I'm not running any Veeam provider services (our shop is more of an MSP with hands-on work than a cloud provider), but we have just been enabling the Capacity Tier without enabling Move/Copy, and then scripting the moves to the Capacity Tier for clients who want such a setup. I'll need to see if I can get a demo license for Cloud Connect to test this, but is there any reason this wouldn't work for your setup? Granted, it's not automated or granular (as it should be, imo), but doesn't this meet your need exactly? This is the cmdlet I have in mind, and while I'm not sure if it does the same for service providers, for me it's been invaluable. I just feed a simple CSV of client-essential backups to a script we wrote internally, and it kicks the backups up to whatever S3 provider they choose. Does this work in your environment?
Thanks for chiming in and helping out, Harvey! We are also an MSP and host a lot of VMs for clients in our private cloud. Funny you should mention this, because just yesterday I was playing around some more with object storage and was only able to create the capacity tier without enabling the "Move/Copy" option, as you mentioned. I was just trying to see if it was possible, knowing I would be stuck in the same situation even if it was, due to not being able to configure this on a per-job basis.
However, I had completely failed to consider whether this was possible with a PS script! I'm browsing through that link right now, but I don't see a command to copy. I don't want to move, because we are using forever incremental without any synthetic/active fulls. I will continue to look. Appreciate your input.
@veremin
Would it be possible to script it out with a copy rather than a move? This would be extremely helpful. Ideally it would run on a scheduled interval and copy all new files up to the capacity tier for only the specified jobs.
Thanks.
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: veeam 10 copy to cloud seems very limited
Haven't tested it personally, but I seriously doubt it will work: the underlying mechanics were implemented with different logic in mind (tiering at the SOBR level), so even PowerShell is unlikely to be the answer here. Thanks!
-
- Service Provider
- Posts: 16
- Liked: 2 times
- Joined: May 07, 2015 6:58 pm
- Full Name: John Massarello
- Contact:
Re: veeam 10 copy to cloud seems very limited
I'm in the same boat as some of the earlier posters, but from the MSP side. We purchased an S3-compatible object storage platform to use with another solution (a competitor to Veeam that very easily allows per-job copies directly to S3/compatible storage). We had hoped we would be able to do the same with our Veeam Cloud Connect customers, based on how the object storage enhancements were marketed by Veeam. We, like most, were disappointed. The notion that, as a service provider, if a customer wants to send copies to object storage, they must either pay us more for a copy job landing zone or consume/buy more on-premises storage for a landing zone is honestly crazy. In most cases, when we present this option to our customers alongside the simpler solution using the competitor's software, they just spend a little more on the competitor. It ends up being cheaper to pay a slightly higher licensing cost than to buy all the extra storage needed to accomplish the same thing with Veeam.
All that said, one user noted that this was assumed to be Veeam's way of helping their service providers that offer Cloud Connect. And if those SPs are using traditional RAID storage solutions, sure, maybe they accomplished that. But as an SP, I am now not even able to make use of these enhancements without causing my customer to need significantly more resources, either locally or by paying me for them. Most won't do it. Additionally, the argument I've seen in other posts was that Veeam didn't want to do this because "think of the restore times". I haven't spoken to an IT administrator who doesn't already recognize that restoring from cloud and/or object storage is painful. Regardless of how this was thought through, or what the intentions were, it fails end users and service providers.
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: veeam 10 copy to cloud seems very limited
That's not correct. The customer can just add your object storage as their Capacity Tier. In this case, neither a landing zone on your end nor more on-prem storage on their end is needed.
-
- Service Provider
- Posts: 16
- Liked: 2 times
- Joined: May 07, 2015 6:58 pm
- Full Name: John Massarello
- Contact:
Re: veeam 10 copy to cloud seems very limited
Thanks for the reply, Gostev. I do hope I'm wrong about that. If they simply add our object storage as a capacity tier, how is the retention controlled? That is, if they select both move and copy, how do they specify how many inactive chains are stored on our object storage, or for how long?
-
- Chief Product Officer
- Posts: 31804
- Liked: 7298 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: veeam 10 copy to cloud seems very limited
The move policy threshold is specified in their scale-out backup repository settings, namely on the Capacity Tier step.
While overall data retention policy is still specified on the backup job (or backup copy job) level, as it has always been. The retention policy is defined by your clients' business requirements, while the backup target is quite a perpendicular topic really. For example, changing one backup repository to another in the job does not magically change business requirements around data retention policies.
Scale-out backup repository with Capacity Tier merely helps to optimize long-term backup storage costs, and simplify compliance with the 3-2-1 rule of backup. But other than that, it is still just a backup target, even if a very smart one due to its advanced data management logic. So don't think of it any differently than how you think about a regular Veeam backup repository.
-
- Service Provider
- Posts: 49
- Liked: 3 times
- Joined: Apr 24, 2009 10:16 pm
- Contact:
Re: veeam 10 copy to cloud seems very limited
I am also trying to implement the 3-2-1 rule.
I have one primary repository where I keep a reasonable number of restore points, plus one secondary off-site repository with all restore points from the primary repository and additional GFS restore points for archival purposes. Now I want to protect the most recent restore points from deletion by using the object lock feature in S3. I do not want to copy ALL data to the cloud, just the most recent restore points. However, this is not possible due to the limitations mentioned above. I have now learnt that I would have to create a new backup job targeting a new SOBR as a landing zone, but that seems like an inefficient workaround that needs a lot of disk space on the performance tier.
Feature Request: Specific retention policy for restore points *copied* to the capacity tier.
-
- Veteran
- Posts: 643
- Liked: 312 times
- Joined: Aug 04, 2019 2:57 pm
- Full Name: Harvey
- Contact:
Re: veeam 10 copy to cloud seems very limited
Hey MrSpock,
I'm a bit confused -- with Copy mode, when I enable it, I'm asked if I want all points or just latest -- isn't that exactly what you want?
Try disabling copy mode, saving, then enabling copy mode again.
-
- Service Provider
- Posts: 49
- Liked: 3 times
- Joined: Apr 24, 2009 10:16 pm
- Contact:
Re: veeam 10 copy to cloud seems very limited
soncscy,
I think that question refers to restore points already existing in the repository when you enable copy mode; read step 4 in the help. The capacity tier will use the same retention policy as the copy job.
-
- Product Manager
- Posts: 20400
- Liked: 2298 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: veeam 10 copy to cloud seems very limited
Correct, the said option is not suitable here: after the initial selection, the Capacity Tier will gradually catch up with the Performance Tier in terms of stored restore points, whereas you seem to be after separate retention for objects stored in the Capacity Tier (copy and preserve only the most recent restore points).
So, thank you for the feature request; appreciated.