Hello,
we have a customer with Veeam Backup & Replication, and we use it to back up their VMs locally to a NAS.
Recently we started investigating how to integrate GCP (in particular, Google Cloud Storage) as a remote off-site repository for copies of the NAS backups.
Currently we have a Scale-Out Backup Repository where we integrated Google Cloud Storage as the capacity tier.
In the capacity tier configuration we enabled the option "Copy backups to object storage as soon as they are created".
Now, whenever the backup job writes new daily backup files to the NAS, they are copied to our Google bucket.
1) The first mechanism we need to understand better: on the NAS we have, for each daily VM backup, a few big files, but in Google Cloud Storage we see a completely different structure of directories and files, with a lot of smaller files. Is this correct? Does Veeam perform some kind of "transformation" when moving data to object storage?
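For illustration, this is roughly how we noticed it, by listing the bucket with the google-cloud-storage Python client (the bucket name below is a placeholder for ours, and the script assumes application default credentials are configured):

from google.cloud import storage

# Count and size the objects Veeam wrote to the capacity tier bucket.
# "our-veeam-capacity-tier" is a placeholder bucket name.
client = storage.Client()
blobs = list(client.list_blobs("our-veeam-capacity-tier"))

total_bytes = sum(b.size for b in blobs)
print(f"{len(blobs)} objects, {total_bytes / 1e12:.2f} TB total")
print(f"average object size: {total_bytes / len(blobs) / 1e6:.2f} MB")

Instead of a handful of multi-gigabyte .vbk/.vib files like on the NAS, this reports many thousands of small objects.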
2) The second question is about uploading a very large quantity of data. Consider, for example, that we have 30 TB of data in the customer's backups. If we want to integrate Google Cloud Storage as I described earlier, Veeam has to upload those 30 TB, and that will surely take a long time.
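Just as a rough back-of-the-envelope estimate (the bandwidth figures below are made-up examples, not our actual line):

# Rough upload-time estimate for seeding 30 TB over various example links.
DATA_BYTES = 30e12  # 30 TB

for label, mbps in [("100 Mbps", 100), ("500 Mbps", 500), ("1 Gbps", 1000)]:
    bytes_per_second = mbps * 1e6 / 8  # convert link speed to bytes per second
    days = DATA_BYTES / bytes_per_second / 86400
    print(f"{label}: ~{days:.1f} days")

Even at a sustained 1 Gbps that is almost three days of continuous upload, and real-world throughput is usually lower.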
From your experience, is there a way to seed the Google repository faster? I read about Google Transfer Appliance, for example, but it seems to work only if you simply "copy" files to the appliance exactly as they are stored locally. In our case, if our assumption in point 1 is correct, we don't have the right structure of files and directories locally to copy to Google Cloud, because they are "transformed" by Veeam during the upload process. Is this correct?
Is there another, more optimized way to accomplish such an initialization with this much data?
I hope I was sufficiently clear.
Kind regards,
Matt
Re: Moving to object storage a lot of data (reply by Hannes Kasparick, Product Manager)
Hello,
and welcome to the forums.
1) Yes, that's correct. The user guide has more details.
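Roughly speaking, when offloading to the capacity tier, Veeam splits each backup file into small blocks and stores every block as a separate object; that is why you see many small objects instead of a few large files, and it is also why, after the initial offload, only new or changed blocks need to be uploaded. The real format (object naming, metadata, compression) is internal to the product, but conceptually the layout is along these lines (bucket name, file name, and block size below are made up for illustration):

from google.cloud import storage

# Conceptual sketch only: store a large backup file as many fixed-size
# block objects. Veeam's actual offload format is proprietary and differs.
BLOCK_SIZE = 1024 * 1024  # 1 MB, an illustrative value

client = storage.Client()
bucket = client.bucket("example-capacity-tier-bucket")  # placeholder name

with open("backup.vbk", "rb") as f:  # placeholder file name
    index = 0
    while True:
        block = f.read(BLOCK_SIZE)
        if not block:
            break
        bucket.blob(f"blocks/backup.vbk/{index:08d}").upload_from_string(block)
        index += 1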
2) Preliminary answer: I would go for direct upload. There is built-in support for Azure Data Box and AWS Snowball Edge (the equivalents of Google Transfer Appliance), but not for Google Transfer Appliance itself. That does not mean it's impossible; it means it might be possible with some manual work. I will ask my colleagues whether there is maybe something available and come back on this.
Best regards,
Hannes
Re: Moving to object storage a lot of data (reply by Vladimir Eremin, Product Manager)
Last time we checked, Google Transfer Appliance supported only NFS and it was not possible to expose it over an API, so I seriously doubt that it's possible to connect the appliance and use it as a seeding box. Thanks!