- Posts: 2
- Liked: never
- Joined: Mar 13, 2023 2:06 pm
- Full Name: Thor
1. Add the NFS share in jobs/backups and attempt to manually select the files to be copied from there.
2. SOBR configuration: have the bucket as part of a retention tier. (This doesn't seem viable, as the only way I would be able to select the files is via conditional options, i.e. older than a certain date, etc.)
3. Export the content as virtual disks, setting the target as the Google Cloud bucket. (Not sure if this is even possible.)
- Product Manager
- Posts: 7591
- Liked: 1974 times
- Joined: May 13, 2017 4:51 pm
- Full Name: Fabian K.
- Location: Switzerland
Choosing only yearly backups by exporting single backup files to object storage would lead to a lot of duplicated data. Each export is treated as a new full backup and therefore consumes the entire space of a full backup.
As an alternative, you could use Copy Backup Chain to Object Storage. That would copy the entire backup chain as a block-clone-aware operation: only unique blocks are offloaded. That way you save costs on both API calls and storage.
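To illustrate why a block-clone-aware copy is cheaper than repeated full exports, here is a minimal, hypothetical sketch of block-level deduplication (not Veeam's actual implementation): each backup is split into fixed-size blocks, and only blocks with a hash not seen before need to be stored. The block size, sample data, and function names are invented for the example.

```python
import hashlib

BLOCK_SIZE = 4  # bytes; real backup software uses far larger blocks (e.g. 1 MB)

def split_blocks(data: bytes, size: int = BLOCK_SIZE):
    """Split a byte string into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def unique_blocks(backups):
    """Map the hash of each distinct block across all backups to its size."""
    seen = {}
    for data in backups:
        for block in split_blocks(data):
            seen[hashlib.sha256(block).hexdigest()] = len(block)
    return seen

# Two toy "full backups" that share most of their content.
full_2022 = b"AAAABBBBCCCCDDDD"
full_2023 = b"AAAABBBBCCCCEEEE"  # only the last block changed year-over-year

# Exporting each as an independent full stores every block again:
naive_bytes = len(full_2022) + len(full_2023)  # 32 bytes

# A block-clone-aware copy stores each unique block only once:
deduped_bytes = sum(unique_blocks([full_2022, full_2023]).values())  # 20 bytes

print(naive_bytes, deduped_bytes)  # 32 20
```

With mostly-unchanged yearly data, the savings grow with every additional full in the chain, since each new full contributes only its changed blocks.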
Please note that our product doesn't support immutability for Google Cloud Storage. If you require immutable backups, check out other cloud services that support immutable object storage. I also recommend comparing AWS/Azure with providers that don't bill for API calls, for example Wasabi; many other providers have a similar pricing scheme.
- Veeam Ready Database: https://www.veeam.com/alliance-partner- ... tml?page=1
- Unofficial object storage compatibility list: object-storage-f52/unoffizial-compatibi ... 56956.html