-
- Service Provider
- Posts: 453
- Liked: 30 times
- Joined: Dec 28, 2014 11:48 am
- Location: The Netherlands
- Contact:
moving virtual machine between jobs in a cloudtier model
Hi,
I was thinking about the use of Cloud Tier in a SOBR configuration where multiple backup jobs target the same Scale-Out Backup Repository (meaning they use the same bucket).
The retention time for backups is defined at the job level. Suppose we have a virtual machine that has been running in a job for a year. The operational restore window of 14 days is kept locally and copied out to the capacity tier, and the monthly and yearly backups are moved and offloaded to the capacity tier as well.
After a year we review the backup job and conclude that this specific virtual machine has to be moved to a new job with a new retention scheme, due to legal policy changes that apply to the virtual machine.
When the virtual machine is moved to a new job, will it reuse the existing blocks of the monthly and yearly backups in the capacity tier?
Or:
Is it inevitable that, because a new chain is started, the first full backup of the virtual machine will also claim new blocks in the Cloud Tier, resulting in double usage of the capacity?
What would be a good strategy for keeping the monthly and yearly backups in the Cloud Tier without impacting capacity when moving virtual machines between jobs?
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
Hello,
A new job means a new backup chain (the second of your options).
As of now, only planning in advance can help.
Best regards,
Hannes
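The capacity-doubling effect described above can be illustrated with a minimal sketch. This is an assumed model, not Veeam's actual implementation: it simply treats each backup chain as owning its own set of block hashes, with no cross-chain reuse, which is why the new job's first full backup claims fresh blocks in the bucket.

```python
# Rough illustration (assumed model, not Veeam internals) of why moving a
# VM to a new job duplicates blocks in the capacity tier: each backup
# chain tracks only its own blocks, and a new chain starts empty, so the
# first full backup re-uploads blocks the old chain already holds.

def blocks_used(chains):
    """Total blocks stored across all chains (no cross-chain dedupe)."""
    return sum(len(chain) for chain in chains)

# The VM's data, represented as a set of block hashes.
vm_blocks = {f"block-{i}" for i in range(100)}

# Old job: its chain already holds the VM's blocks (monthly/yearly GFS).
old_chain = set(vm_blocks)

# VM moved to a new job: the new chain cannot reference the old chain's
# blocks, so its first full backup stores the same 100 blocks again.
new_chain = set(vm_blocks)

print(blocks_used([old_chain, new_chain]))  # 200 blocks: double usage
```

The same 100 blocks end up stored twice, once per chain, until the old chain's retention finally expires.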
-
- Expert
- Posts: 119
- Liked: 11 times
- Joined: Nov 16, 2020 2:58 pm
- Full Name: David Dunworthy
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
Configure deleted items retention in your backup job. This way the VM data will be deleted from both the performance tier and the capacity tier automatically after X days pass, at least the short-term chain anyway. The GFS restore points are left unless you manually tell Veeam to delete them, I believe.
But still, this would help free up some of the space.
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
With V11, GFS restore points will be deleted once their retention time is over.
-
- Expert
- Posts: 119
- Liked: 11 times
- Joined: Nov 16, 2020 2:58 pm
- Full Name: David Dunworthy
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
So you mean the job setting of "deleted items retention", right?
Imagine you have 7 years of data and the deleted items retention setting configured for 30 days. You move the VM to a new job, expecting only the recent chain to be gone after 30 days.
Do you lose all that archival data instead of just the recent chain now?
What was the logic in changing this from v10?
Also, does it matter or have any impact if these GFS points are all in the archive tier now with V11, or is it the same deal and they still get deleted?
-
- Product Manager
- Posts: 9848
- Liked: 2607 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
There is GFS retention, and there is a "remove deleted items after X days" retention.
GFS restore points will not be deleted by "remove deleted items after X days".
The VM will be removed only from the forever-incremental chain.
Product Management Analyst @ Veeam Software
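The distinction Fabian describes can be sketched in a few lines. This is an illustrative model only (the dictionary shape and function name are invented for the example, not Veeam's API): "remove deleted items after X days" prunes a missing VM from the short-term incremental chain, while GFS restore points are governed solely by their own GFS retention.

```python
# Assumed model of "remove deleted items after X days" (not Veeam code):
# once a VM has been absent from the job for more than X days, its
# short-term restore points are pruned, but GFS points survive.

def apply_deleted_items_retention(restore_points, days_since_last_seen, x_days):
    # VM still within the grace window: nothing is removed.
    if days_since_last_seen <= x_days:
        return restore_points
    # VM gone longer than X days: keep only the GFS restore points.
    return [p for p in restore_points if p["gfs"]]

points = [
    {"name": "daily-1", "gfs": False},
    {"name": "daily-2", "gfs": False},
    {"name": "monthly-2021-04", "gfs": True},
    {"name": "yearly-2020", "gfs": True},
]

# VM removed from the job 45 days ago, X = 30: only GFS points remain.
print(apply_deleted_items_retention(points, days_since_last_seen=45, x_days=30))
```

With the VM gone for 45 days and X set to 30, the two daily points are pruned while the monthly and yearly GFS points stay put.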
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
I was referring to the statement "The GFS restore points are left unless you manually tell Veeam to delete them, I believe."
Example: you delete a backup job. In V10, that means retention is never applied. If the backup job had GFS configured for, say, 4 weeks, 6 months and 3 years, then in V11 these GFS restore points will be deleted after those 4 weeks, 6 months and 3 years. The primary backup chain stays there forever, because it could be configured with a number of restore points (which have no relationship with time).
What's New document wrote: Background GFS retention — GFS full backup retention is now processed independently from the backup job
execution as a background system activity on the Veeam repository. This ensures that the expired full backups
won’t continue to consume repository disk space if the backup job gets disabled for extended time periods.
Orphaned GFS backups retention — The retention policy is now applied to GFS backups that no longer
have a job associated with them, based on their last-known retention policy. This removes the need for
workarounds, such as keeping the no-longer-necessary jobs protecting a single dummy machine.
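The two What's New items above can be condensed into a small sketch. Everything here is an assumed model for illustration (the `GFSPoint` class and field names are invented): in V11, GFS retention runs as a background task against the stored restore points themselves, using each point's last-known retention policy, so it works even when the owning job was disabled or deleted.

```python
# Hypothetical sketch (not Veeam internals) of V11 background / orphaned
# GFS retention: expiry is decided from the restore point's own
# last-known policy, independent of whether the job still exists.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class GFSPoint:
    created: date
    keep_days: int    # last-known retention policy, stored with the point
    job_exists: bool  # orphaned points have no associated job

def expired(point: GFSPoint, today: date) -> bool:
    # Note: point.job_exists is deliberately ignored; V11 applies
    # retention to orphaned GFS backups too.
    return today > point.created + timedelta(days=point.keep_days)

today = date(2021, 6, 1)
points = [
    GFSPoint(date(2021, 1, 1), keep_days=28, job_exists=False),   # expired
    GFSPoint(date(2021, 5, 20), keep_days=28, job_exists=False),  # kept
]
print([expired(p, today) for p in points])  # [True, False]
```

The key design point is that `job_exists` plays no role in the expiry decision, which is exactly what removes the need for the dummy-machine workaround the document mentions.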
-
- Expert
- Posts: 119
- Liked: 11 times
- Joined: Nov 16, 2020 2:58 pm
- Full Name: David Dunworthy
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
Mildur,
Thank you. I was talking about the "deleted items retention" and am happy to hear it still works the same and how I expected.
Hannes,
Thank you! Now I understand what you meant. Those two changes in the "whats new" section are both great ideas. I used to wonder about that issue of the job not being there and since retention was processed in the job. Now these changes take care of it.
-
- Expert
- Posts: 119
- Liked: 11 times
- Joined: Nov 16, 2020 2:58 pm
- Full Name: David Dunworthy
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
A little more elaboration on this please.
So I have an on-prem VMware environment and also VMware Cloud on AWS. Each environment has its own vCenter, so if I migrate a VM from on-prem up to VMC, its moRef changes. Either way, I have to start a new job for this VM, or put it into a job that is already using the correct SOBR for the VMC environment. As mentioned in this thread, this means a new chain for the VM and thus redundant data in object storage...
So my worry is that large file servers will be running along on-prem, collecting GFS points and putting those into an object storage bucket. Then I migrate a file server to VMC and put it into a Veeam job up there. I guess I can let the "deleted items retention" remove the short-term chain from the on-prem SOBR/bucket, so as far as the short-term chain goes it is back to only one copy in object storage. But all the GFS points remain in the first SOBR/bucket, and will now also build up over time in the new SOBR/bucket, making our billing higher.
I know, as you said, all we can really do until Veeam has something to manage this is to "think ahead", but I expect a lot of customers will be migrating their VMs from on-prem environments to cloud ones over the next few years, so this should come up often.
With all my planning, this is the one thing I didn't consider.
So overall, the only thing that can help is to make sure to use the archive tier, with Glacier Deep Archive for anything that will be kept more than 6 months, as that way it's about $1 per TB. Then all these stale leftover GFS points in the first SOBR/bucket are as cheap as possible, so the redundant data in two buckets is not so expensive... Also, I could leave immutability for the archive tier itself off, so that I am able to delete those GFS points if the company deems it OK (but then I'm charged for deletion API calls, which I think are expensive with Glacier).
Given this scenario of on-prem environments migrating to cloud ones, and the way Veeam will duplicate data since there is no global bucket dedupe: is this something Veeam has thought about and might have a roadmap for?
I could have used only one SOBR and bucket, but the problem with that design is that I had dedicated repositories in the cloud environment and also in the on-prem environment. It did not make sense to make one SOBR where Veeam might place backups from the cloud environment onto on-prem repositories (costing egress) and vice versa... The two-SOBR design allowed all backups to go to the correct, cheapest places and still be offloaded to S3 buckets, while all being managed by one single B&R server.
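The "$1 per TB" figure above is a back-of-envelope estimate. A quick sanity check, using the commonly cited S3 Glacier Deep Archive storage price of roughly $0.00099 per GB-month (an assumption here; actual pricing varies by region and over time, so verify against current AWS pricing before budgeting):

```python
# Rough monthly storage cost for S3 Glacier Deep Archive, using an
# assumed price of ~$0.00099 per GB-month (check current AWS pricing;
# this excludes API, retrieval, and early-deletion charges).

def deep_archive_monthly_cost(tb: float, price_per_gb: float = 0.00099) -> float:
    """Monthly storage cost in USD for `tb` terabytes (1 TB = 1024 GB)."""
    return tb * 1024 * price_per_gb

print(round(deep_archive_monthly_cost(1), 2))   # ~1.01 USD per TB-month
print(round(deep_archive_monthly_cost(50), 2))  # ~50.69 USD for 50 TB
```

So even if the stale GFS points from the old SOBR/bucket linger for years, parking them in Deep Archive keeps the duplicate-data cost roughly in the $1/TB-month range, as the post estimates.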
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
Just because I'm curious... everything I have heard from customers about VMware running in Amazon / Google / Microsoft datacenters is that it is very expensive. So I ask myself: how relevant are the object storage costs in relation to the hypervisor costs?
Yes, moving GFS backups to archive / Glacier makes sense if the retention is long enough.
Veeam also applies retention to GFS restore points in V11, as I mentioned before.
Global deduplication goes against one of our key features: self-contained backups. So there are no plans for that.
-
- Expert
- Posts: 119
- Liked: 11 times
- Joined: Nov 16, 2020 2:58 pm
- Full Name: David Dunworthy
- Contact:
Re: moving virtual machine between jobs in a cloudtier model
Yes, the object storage cost is probably a tenth per year or less of what we are paying to actually run the cloud production environment.
But each project has its own budget. I did not account for doubling the data kept in object storage as a result of this migration to the cloud, so it was not budgeted.
I can understand the philosophy of self-contained and portable backups.
I will just have to hope that Glacier and Deep Archive make it not noticeable enough to be a huge issue.