-
- Enthusiast
- Posts: 29
- Liked: 1 time
- Joined: May 06, 2015 9:36 pm
- Location: USA
- Contact:
Copy jobs to Wasabi on version 12
Hi everyone,
We currently have a copy job pointing to a SOBR with Wasabi as the capacity tier for our GFS points on Veeam 11. We are thinking about going direct to Wasabi for the GFS points and eliminating the performance tier once we upgrade to version 12. Our concern is that the GFS points will take a long time to create, since each one requires a synthetic full over the Internet. We have seen performance issues even on NAS storage accessed via LAN. Is anyone currently doing copies direct to object storage in version 12? If so, are you seeing any performance issues?
Thanks for the input.
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
Hello, you can basically forget your experience with file/block-based repositories, as it does not apply to object storage. This is really the best approach; otherwise you will face endless confusion trying to draw parallels that do not exist, given how different these storage technologies are.
For starters, object storage has no file system and thus no backup files. So there is no such thing as a synthetic full backup file creation process at all. On NAS that process can indeed be quite slow, because it requires lots of IOPS from storage devices without block cloning (this is one of many reasons why we don't recommend NAS as a backup target).
With cloud object storage, your only concern should be Internet bandwidth and whether it is enough to handle your data change rate. It appears to be sufficient, given that you are already using the Capacity Tier to copy the exact same backups to Wasabi. So I don't expect any issues with the switch you're planning, and it makes sense.
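[Editor's note: for readers wanting to sanity-check their own link, here is a minimal back-of-the-envelope sketch of that bandwidth check. It is not a Veeam tool; the change rate, copy window, and link figures are placeholder assumptions to replace with your own numbers.]
Code:
# Rough sanity check: can the Internet link absorb the daily change rate?
# All figures are illustrative assumptions, not measured values.

daily_change_tb = 0.5     # assumed daily changed data after compression/dedupe
copy_window_hours = 10    # assumed nightly window for the copy job
link_mbps = 500           # assumed usable upload bandwidth, Mbit/s

bits_to_send = daily_change_tb * 1e12 * 8
seconds_available = copy_window_hours * 3600
required_mbps = bits_to_send / seconds_available / 1e6

print(f"Required sustained upload: {required_mbps:.0f} Mbit/s "
      f"(link provides {link_mbps} Mbit/s)")
# If the requirement exceeds the link, incremental copies spill past
# the window and restore points start lagging behind schedule.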
Thanks!
-
- Enthusiast
- Posts: 29
- Liked: 1 time
- Joined: May 06, 2015 9:36 pm
- Location: USA
- Contact:
Re: Copy jobs to Wasabi on version 12
Hi Gostev,
Hope all is well with you. Thanks for that information. That is really helpful. Do you recommend creating a new bucket and starting over with the backup chain or can we just change the job properties and continue using the same bucket?
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
It is safer to create a new bucket. I don't think mapping a backup copy job into Capacity Tier backups was ever tested, so the results are completely unpredictable.
-
- Enthusiast
- Posts: 29
- Liked: 1 time
- Joined: May 06, 2015 9:36 pm
- Location: USA
- Contact:
Re: Copy jobs to Wasabi on version 12
Can you explain how the process works regarding the creation of GFS points when using object storage? If it doesn't perform a synthetic backup, how does it generate the GFS point?
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
GFS full restore point creation is a metadata-only operation. In simple terms, we just store links to the existing objects that would normally constitute that "synthetic full backup file" on NAS, the one you've seen taking forever to create because all of those data blocks must be physically copied into a new backup file. With object storage, we simply reference them.
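[Editor's note: to make the "links to existing objects" idea concrete, here is a minimal conceptual sketch. It is an editorial illustration only; the names like BlockRef and the index layout are invented and do not reflect Veeam's actual format.]
Code:
# Conceptual sketch: a metadata-only synthetic full on object storage.
# A "full" is just an index of object keys already in the bucket;
# no data blocks are read back or rewritten.
from dataclasses import dataclass

@dataclass(frozen=True)
class BlockRef:
    key: str     # object key of an immutable data block in the bucket
    offset: int  # position of the block within the disk image

def synthetic_full(previous_full: list[BlockRef],
                   incremental: dict[int, str]) -> list[BlockRef]:
    """Build a new 'full' by re-referencing unchanged blocks and
    pointing changed offsets at the incremental's new objects."""
    return [BlockRef(incremental.get(ref.offset, ref.key), ref.offset)
            for ref in previous_full]

# On NAS the equivalent operation copies every data block into a new
# backup file; here it only produces a small list of references.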
-
- Enthusiast
- Posts: 29
- Liked: 1 time
- Joined: May 06, 2015 9:36 pm
- Location: USA
- Contact:
Re: Copy jobs to Wasabi on version 12
Are there any issues with sending large backups to Wasabi? One of our jobs has about 14TB of backups. Do you recommend breaking it into multiple jobs or will that size not be an issue? I understand the initial copy will take a long time, but that is ok. Just concerned about stability with the backup chain going forward.
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
14TB is a relatively small backup, to be honest.
But no, backup size does not matter until at least a few hundred TBs. Only then do you typically need to start thinking about possible object storage scalability issues due to too many objects in a single bucket. This is very storage-dependent, though: these days even the worst object storage out there will handle at least 50TB per bucket before starting to experience significant issues, most can do a few hundred TBs, and Amazon S3 could do around 1PB when we last checked a year or two ago. I have no idea about Wasabi, but I would expect it to be closer to Amazon S3 on this spectrum, because I have never seen any corresponding complaints.
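[Editor's note: for a rough sense of the object counts involved, a quick editorial calculation. The 1 MB average object size is an assumption for illustration; actual object sizes depend on compression and the job's storage optimization setting.]
Code:
# Estimate objects per bucket from backup size and average object size.
backup_tb = 14
object_size_mb = 1  # assumed average object size

objects = backup_tb * 1e12 / (object_size_mb * 1e6)
print(f"{backup_tb} TB at ~{object_size_mb} MB/object is about {objects:,.0f} objects")
# 14 TB comes to roughly 14,000,000 objects, far below the hundreds
# of TBs where per-bucket scalability typically becomes a concern.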
-
- Veeam Software
- Posts: 187
- Liked: 29 times
- Joined: Apr 12, 2022 7:23 am
- Full Name: Christoph Weber
- Contact:
Re: Copy jobs to Wasabi on version 12
Hi
Thank you very much for this explanation. It helped me a lot, as I don't think the Help Center spells this out. My additional question is whether the behavior is the same for backup copy jobs. Here is the scenario:
- Backup to standard repo / SOBR without capacity tier
- Backup copy job to object storage repo
- Copy settings shown below:
[screenshot: backup copy job settings]
Is my assumption correct that a copy job to object storage also creates GFS fulls in the way you described (a metadata-only operation), and that no additional or redundant data is sent and stored to the object storage except the changed blocks of the incremental?
What happens if "Read the entire restore point from source backup..." is activated? Does this setting behave like an active full backup and store all blocks once again in the bucket, or does it still work as a metadata-only operation in the background?
Thank you very much for your clarification
Best regards, Christoph
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
Correct on both counts.
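[Editor's note: to make the confirmed difference concrete, a rough editorial sketch of the upload volume for one GFS point in each mode. The size and change-rate figures are assumptions.]
Code:
# Illustrative upload volume for one GFS point in each mode.
source_size_tb = 14       # assumed size of the source full backup
daily_change_rate = 0.03  # assumed fraction of data changed per day

incremental_tb = source_size_tb * daily_change_rate
print(f"Metadata-only GFS full: ~{incremental_tb:.1f} TB uploaded "
      "(changed blocks plus small metadata objects)")
print(f"'Read the entire restore point' enabled: ~{source_size_tb} TB uploaded "
      "(every block stored again, like an active full)")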
-
- Service Provider
- Posts: 176
- Liked: 53 times
- Joined: Mar 11, 2016 7:41 pm
- Full Name: Cory Wallace
- Contact:
Re: Copy jobs to Wasabi on version 12
One thing to note if you are doing immutability: when doing GFS retention directly to an object storage repository, the GFS restore points are immutable for the entire duration of their retention. For example, if you have 14 days of immutability set on your repository and you have GFS points retained for months or years, those points will be immutable and undeletable until their GFS expiration date, not just for the 14 days. I have a feature request to allow this to be configurable, because I am personally not a fan of this behavior.
-
- Enthusiast
- Posts: 29
- Liked: 1 time
- Joined: May 06, 2015 9:36 pm
- Location: USA
- Contact:
Re: Copy jobs to Wasabi on version 12
Actually (if this is correct), I like this behavior. One thing that has concerned us is the possibility of a malicious actor deleting all our GFS points. Yes, we have immutability set to protect our most recent backups, but without this behavior we could lose all our archive backups. I would love to have verification of this from a Veeam staff member.
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
That is correct: GFS backups are made immutable for the entire duration of their retention policy. We cannot do the same for recent backups, because for those we also support restore-points-based retention policies (in addition to the default time-based ones). But with GFS backups, we always know their deletion date and time right when creating them.
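[Editor's note: mechanically this maps onto S3 Object Lock: because a GFS point's deletion date is known at creation time, a retain-until date can be stamped on each object up front. Below is a minimal boto3 sketch of that underlying primitive; it is an editorial illustration, not Veeam's code, and the bucket and key names are hypothetical. It requires a bucket created with Object Lock enabled.]
Code:
# Sketch: stamping a per-object retain-until date with S3 Object Lock,
# the primitive behind immutability on S3-compatible storage.
# Bucket and key names are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

gfs_retention_days = 365  # assumed retention of a yearly GFS point
retain_until = datetime.now(timezone.utc) + timedelta(days=gfs_retention_days)

s3.put_object(
    Bucket="example-gfs-bucket",
    Key="blocks/0001",
    Body=b"...data block...",
    ObjectLockMode="COMPLIANCE",             # cannot be shortened or removed
    ObjectLockRetainUntilDate=retain_until,  # known up front for GFS points
)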
-
- Service Provider
- Posts: 176
- Liked: 53 times
- Joined: Mar 11, 2016 7:41 pm
- Full Name: Cory Wallace
- Contact:
Re: Copy jobs to Wasabi on version 12
I can certainly see why people would want that, can see the value in it, and would choose it myself in some scenarios, given the option. I do, however, think it is dangerous to pigeonhole people into making 100% of their GFS retention points immutable for the entire length of retention. If we run out of space quicker than we expected (on-prem storage) or costs are getting too high (off-prem storage), I would prefer the ability to delete those restore points if needed.
For most of my clients, long GFS retention is a nice-to-have but not really mandatory, so flexibility would be preferred over immutability in those circumstances (as long as recent data is immutable). Other clients that are under strict compliance requirements would feel differently. That's why I think this should be configurable.
-
- Chief Product Officer
- Posts: 31968
- Liked: 7438 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Copy jobs to Wasabi on version 12
That's good feedback.