-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello Community,
let me elaborate on this. First of all, apologies for the long thread, but I wanted to provide as much information as possible.
As we are seeing strong demand from our Veeam customers who want to use a cloud storage solution for their backup archives, we recently enrolled in the Wasabi MSP Program (after using Veeam Cloud Connect for years).
The biggest challenge we are currently facing arises when the initial upload to the Wasabi S3 bucket consists of several terabytes. Even though the "upload window" and the bandwidth allotted to Veeam during the night can cope with the average size of the daily incremental uploads, many customers will struggle with an initial upload of several terabytes.
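To put rough numbers on this (the figures below are purely illustrative assumptions, not the customer's actual values): even a dedicated 100 Mbit/s upstream used for 8 hours a night moves well under half a terabyte per night, so a multi-terabyte seed ties up the link for a week or two at best:

```python
# Back-of-the-envelope: nights needed for an initial seed (illustrative values).
link_mbps = 100      # hypothetical upstream bandwidth allotted to Veeam at night
window_hours = 8     # hypothetical nightly upload window
seed_tb = 4.0        # initial data set size in the range discussed in this thread

tb_per_night = link_mbps * 1e6 / 8 * window_hours * 3600 / 1e12
print(f"{tb_per_night:.2f} TB per night -> "
      f"~{seed_tb / tb_per_night:.0f} nights for a {seed_tb:g} TB seed")
# -> 0.36 TB per night -> ~11 nights, ignoring protocol and processing overhead
```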
We recently contacted Wasabi support for more information about the Wasabi Ball and unfortunately were told that it is only available in the USA (they expect to have it available in Europe in Q4/22). Most importantly, using the Wasabi Ball with Object Lock is not supported (https://wasabi-support.zendesk.com/hc/e ... ject-Lock-), and immutable backups are exactly what both we and our customers want these days to protect backups against attacks.
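Since immutability is the whole point of the exercise, it may also be worth verifying up front that the seeding bucket was created with Object Lock enabled (on most S3 implementations it must be switched on when the bucket is created). A minimal sketch using boto3 against the Wasabi S3 endpoint; the endpoint URL, bucket name, and credentials are placeholders:

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint/credentials -- substitute your own Wasabi region and keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

def object_lock_enabled(bucket: str) -> bool:
    """Return True if the bucket was created with S3 Object Lock enabled."""
    try:
        cfg = s3.get_object_lock_configuration(Bucket=bucket)
        return cfg["ObjectLockConfiguration"].get("ObjectLockEnabled") == "Enabled"
    except ClientError as err:
        if err.response["Error"]["Code"] == "ObjectLockConfigurationNotFoundError":
            return False
        raise

print(object_lock_enabled("veeam-seed-bucket"))  # hypothetical bucket name
```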
As a result, I am trying to think outside the box and check whether a workaround exists to perform the initial upload to the Wasabi S3 bucket using our Gigabit Fiber Optic Internet, without putting stress on the customer's network in problematic scenarios.
So far I have come up with the following solution, and I would like to confirm that what I'm planning makes sense before going ahead with a pilot customer. The steps below might sound complex and time-consuming, but I believe they are much easier done than described:
==================================================
1. We ship to the customer a properly sized helper Microsoft Windows-based physical server that will be used as a temporary helper backup repository
2. We add the helper server to the customer's Veeam Backup & Replication server (using its name) and then add it as a helper Microsoft Windows backup repository backed by direct-attached storage
3. We add a helper SOBR (I will refer to it as SOBR2 from now on) to the customer's Veeam Backup & Replication server and then choose the helper repository we just created in step 2 above as the Performance Tier
4. We add the Wasabi S3 bucket as "S3 Compatible" object storage to the customer's backup infrastructure. While doing this, we select the "Use the following gateway server" check box in order to choose the helper server added in step 2 above as the gateway server towards the Wasabi S3 bucket.
5. We add the Wasabi S3 bucket created in step 4 to SOBR2 as the Capacity Tier extent. While doing this, we configure the time window so as to initially prohibit any copy of data to the Capacity Tier.
6. We clone an existing Backup or Backup Copy Job (depending on the needs) on the customer's Veeam Backup & Replication server in order to create a helper job with the same selections. This helper job will target SOBR2.
7. We let the helper job targeting SOBR2 run once and create the backup chain in the Performance Tier added in step 3
8. We ship the helper Microsoft Windows-based physical server back to us.
9. We create a helper VLAN behind our Gigabit Fiber Optic Internet and connect it to the customer's backup infrastructure (for example using Veeam PN)
10. We change the IP address of the helper box on the customer's Veeam Backup & Replication server and make sure that name resolution works, so that the backup server can reach the helper server on our side of the Veeam PN tunnel (a quick check for this is sketched right after the list).
11. We modify the time window settings on SOBR2 in order to allow the copy of data to the Capacity Tier using our Gigabit Fiber Optic Internet
12. Once the copy of data to the Capacity Tier has completed successfully, we edit the settings of the Wasabi S3 bucket added in step 4 and clear the "Use the following gateway server" check box, so that the helper server added in step 2 above is no longer the gateway server towards the bucket
13. We delete the helper job targeting SOBR2, SOBR2 itself, the simple helper backup repository, and the helper Microsoft Windows server added in step 2 above, as well as the Veeam PN connection (basically, we clean up all the temporary configuration)
14. We go back to the customer's Veeam Backup & Replication server to add a new SOBR (I will refer to it as SOBR1 from now on). As the Performance Tier we choose the simple repository that is the target of the existing Backup or Backup Copy Job we cloned from in step 6 above. This job will then be reconfigured to target SOBR1
15. We add the Wasabi S3 bucket (already populated with data) created in step 4 to SOBR1 as the Capacity Tier extent and then we configure the time window as well as the other Capacity Tier settings as needed
==================================================
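A note on step 10, which is the step most likely to bite silently: if the customer's backup server still resolves the helper server's old address, offload sessions will simply hang or fail. Before re-opening the time window in step 11, a quick check along these lines (run on the Veeam Backup & Replication server) can confirm name resolution and reachability through the tunnel. The host name is hypothetical, and the ports are just the usual suspects (6160 for the Veeam installer service, 2500 as the start of the default data mover range):

```python
import socket

HELPER_HOST = "helper-repo.example.local"  # hypothetical helper server name
PORTS = [6160, 2500]                       # installer service / first data mover port

# 1. Does the name resolve, and to an address on our side of the tunnel?
addrs = {info[4][0] for info in socket.getaddrinfo(HELPER_HOST, None)}
print(f"{HELPER_HOST} resolves to: {', '.join(sorted(addrs))}")

# 2. Can we actually open a TCP connection through the VPN?
for port in PORTS:
    try:
        with socket.create_connection((HELPER_HOST, port), timeout=5):
            print(f"port {port}: reachable")
    except OSError as err:
        print(f"port {port}: NOT reachable ({err})")
```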
After performing the above steps, I expect Veeam to be able to reuse the objects already stored in the bucket without uploading them again.
I would really appreciate it if you could kindly spend some of your time to confirm whether my plan makes sense.
Wish you a great rest of the day ahead.
Thanks!
Massimiliano
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello,
I did not test it, but it should work.
Best regards,
Hannes
-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hi there Hannes,
first of all, thank you for your patience with this long thread. It is very much appreciated.
This is the information I was trying to find out, as this is a "workaround" that we would like to perform and, as a result, I did not expect the procedure to be documented. My main concern was checking whether I might have missed some important steps or preparations that could turn into a show-stopper before even doing some tests.
As it appears it could work even though it has not been tested, we will go ahead and run some tests with a smaller data set before starting the main process.
Thanks!
Massimiliano
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello,
let the community know how it goes!
Best regards,
Hannes
-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Sure!
I should be able to provide an update on this at the beginning of next week.
Kind Regards,
Massimiliano
-
- Lurker
- Posts: 1
- Liked: never
- Joined: Jul 11, 2022 8:35 pm
- Full Name: Stephen Jenkins
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Greetings,
This is a great idea. I'm in a similar situation and am very interested in hearing back if this worked for you.
Thanks
-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello Community,
apologies for my belated response; I have had a very busy couple of days.
I should be able to provide an update on this at the beginning of next week.
Wish you a great weekend ahead.
Thanks!
Massimiliano
-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello Community,
just a quick update: we completed step 11 above over the course of last weekend, and the initial copy of data from the helper SOBR to the Capacity Tier over our Gigabit Fiber Optic Internet worked flawlessly.
We also performed the remaining steps and are now waiting for the job's permitted time window at the end of the day to start offloading the most recent restore points from the customer's site.
Right after we added the Wasabi S3 bucket (already populated with data) to the customer's SOBR as the Capacity Tier extent, a Configuration Database Resynchronize/S3 repository rescan job started automatically (screenshot omitted; the rescan took about 20 minutes).
Out of curiosity, is the purpose of this job to download backup metadata for every backup file already present in the S3 repository?
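For what it's worth, a rough way to get a feel for what such a rescan has to walk through is simply to enumerate the bucket: Veeam's internal layout aside, the object count and total size hint at the volume of metadata involved. A sketch with the same placeholder endpoint and bucket name as before (credentials are assumed to come from the environment):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-1.wasabisys.com")

# Walk every object in the bucket and tally count and size.
count, total = 0, 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="veeam-seed-bucket"):
    for obj in page.get("Contents", []):
        count += 1
        total += obj["Size"]

print(f"{count} objects, {total / 2**40:.2f} TiB in the bucket")
```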
Just curious/impatient to see whether our plan works out while waiting for the permitted time window for the offload job from the customer's site.
Thanks!
Massimiliano
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello,
that synchronization can take some time, yes. How much data did you use in the test that produced the 20-minute import time?
In V12 we optimized that. So if you have the beta (available via your local Veeam representative), you should be able to see improvements.
Best regards,
Hannes
-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello Hannes,
thank you for your reply.
The customer for whom we came up with the action plan above pushed us to have at least one immutable backup offsite in the cloud before entering vacation mode this week, so we pulled the trigger and eventually decided to test the procedure above, while informing him that the procedure is experimental.
The amount of backup data is above 4 TB, so using our Gigabit Fiber Optic Internet seemed to be the right way to fulfil the customer's request in very little time.
Is there a way to check whether Veeam is able to reuse the objects already stored in the bucket (instead of uploading them again) once the permitted time window for the job kicks in at the end of the day?
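One crude check that comes to mind, short of reading the offload session statistics in the console: snapshot the bucket's total size just before the time window opens and again after the offload finishes. If the seeded objects are reused, the growth should be roughly one increment rather than the full 4 TB. A sketch with the same hypothetical names as before:

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-1.wasabisys.com")

def bucket_bytes(bucket: str) -> int:
    """Sum the size of every object currently stored in the bucket."""
    return sum(
        obj["Size"]
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket)
        for obj in page.get("Contents", [])
    )

before = bucket_bytes("veeam-seed-bucket")  # run just before the window opens
# ... let the offload session run ...
after = bucket_bytes("veeam-seed-bucket")   # run after the session completes
print(f"offload added roughly {(after - before) / 2**30:.1f} GiB of new data")
```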
As for the beta: we are currently using the most recent V11 build.
Thanks again!
Massimiliano
-
- Product Manager
- Posts: 14844
- Liked: 3086 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
OK, 4 TB in 20 minutes for the synchronization is in the expected range.
Offload to the Capacity Tier is always forever-incremental. If the synchronization worked, there is no way an "active full" uploads everything again; you should see that the amount of uploaded data is similar to the days before.
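To make "similar to the days before" concrete, the per-day upload volume can be approximated from each object's LastModified timestamp, with the caveat that objects later deleted or rewritten drop out of the picture. A sketch along the same lines as the earlier ones (placeholder endpoint and bucket):

```python
from collections import Counter
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.eu-central-1.wasabisys.com")

# Group the bytes still present in the bucket by the day they were uploaded.
per_day = Counter()
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="veeam-seed-bucket"):
    for obj in page.get("Contents", []):
        per_day[obj["LastModified"].date()] += obj["Size"]

for day in sorted(per_day):
    print(f"{day}: {per_day[day] / 2**30:.1f} GiB uploaded")
```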
And yes, the beta was just a side note (in case you are not happy with the synchronization time); it is not for production.
-
- Service Provider
- Posts: 218
- Liked: 28 times
- Joined: Jan 24, 2012 7:56 am
- Full Name: Massimiliano Rizzi
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
Hello Hannes,
thank you for your reply.
I only need one additional clarification before modifying the permitted time window to let the offload task do its job from the customer's site, as I would like to double-check that my understanding is correct.
The existing restore points already present in the bucket were copied to the Capacity Tier after they landed on the helper SOBR's Performance Tier, as a result of running the helper jobs we created (step 6 above). These helper jobs were cloned from the production jobs.
The current situation is as follows:
1. On the Home view, in the inventory pane, Backups > Disk shows the backup chains that are present in the Performance Tier on the customer's site
2. On the Home view, in the inventory pane, Backups > Object Storage (Imported) shows the backup chains that were offloaded from the helper SOBR Performance Tier (using our Gigabit Fiber Optic Internet) after the synchronization task did its job. These restore points refer to the same VMs but belong to the cloned jobs
Now, the million-dollar question I am afraid to ask: should we expect Veeam to reuse the objects already stored in the bucket, without uploading them again, even though the restore points (while referring to the same VMs) belong to the cloned jobs?
Apologies for asking for more information; it is very important for me to have a clear understanding of how things work.
Thanks a lot for your patience!
Massimiliano
-
- Product Manager
- Posts: 20415
- Liked: 2302 times
- Joined: Oct 26, 2012 3:28 pm
- Full Name: Vladimir Eremin
- Contact:
Re: Kind of exotic way of seeding Backups to Wasabi. Will it work?
If these restore points are created by different jobs and represent different backup chains (regardless of their actual content), the data will not be reused during the offload process.