schewee
Influencer
Posts: 14
Liked: never
Joined: May 25, 2017 2:19 am
Full Name: Eric Schewe

Repository level encryption

Post by schewee »

I've set up a Veeam repository with repository-level encryption and configured a few standalone Windows Veeam Agents to send their backups to it.

Does this mean the .vbm, .vbk, and .vib files in this repository are now encrypted at rest, and that if I were to use rclone to replicate them somewhere else, they would still be encrypted at the destination?
HannesK
Product Manager
Posts: 14839
Liked: 3085 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria

Re: Repository level encryption

Post by HannesK »

Hello,
yes, for standalone agents. You can test it yourself by copying a backup file to a different VBR server and trying to open it; it will ask you for a password.

For managed agents, encryption is controlled by the settings in the job configuration.

Best regards,
Hannes

PS: I'm interested in your experience with rclone after a few weeks. I assume it will be extremely slow / bandwidth-intensive as soon as you run into merges.

Re: Repository level encryption

Post by schewee »

Fantastic!

I can actually already offer you some info on rclone, since I've got a mix of VM jobs (encryption enabled with a passphrase per job) and agent jobs (previously per-agent passphrases to a CIFS share, now repository-level encryption so I can take advantage of ReFS and faster synthetic fulls).

This setup is for my homelab, and I'm lucky enough to have 1 Gbit fibre internet and no data cap.

My backups are ~14 VM jobs with per-job encryption going to a non-encrypted repository, and 6 agent jobs going to an encrypted repository.

Among my VMs, two are the big ones: a Plex VM with my Blu-ray/HD DVD archive ripped to it (2.5TB) and my general-purpose Windows file server (1.5TB). All other VMs are 200GB or smaller.

Among my agent-based backups, my gaming desktop is the largest at ~1TB; everything else is 200GB or smaller.

All jobs are configured for 7 or 14 restore points and synthetic fulls occur on the weekend (Saturday or Sunday).

Once everything is on my Veeam box, I use rclone with a G Suite Business account with 5 users (1 admin, 4 for backup storage), which qualifies me for unlimited Google Drive storage.

I've created rclone jobs that replicate Plex to account 1, the file server to account 2, all other VMs to account 3, and the standalone agents to account 4; a rough sketch is below.
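For anyone wanting to copy this, here's approximately what the rclone side looks like. A minimal sketch only: the remote names (gdrive1 through gdrive4) and local paths are placeholders, not my actual config:

  # One Google Drive remote per backup account, created beforehand with "rclone config".
  # --transfers 4 matches the four simultaneous file uploads I describe below;
  # --bwlimit (e.g. --bwlimit 75M) could cap total throughput, but I haven't needed it.
  rclone sync /backups/plex       gdrive1:veeam/plex       --transfers 4
  rclone sync /backups/fileserver gdrive2:veeam/fileserver --transfers 4
  rclone sync /backups/other-vms  gdrive3:veeam/other-vms  --transfers 4
  rclone sync /backups/agents     gdrive4:veeam/agents     --transfers 4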

Google Drive allows unlimited storage, so file sizes are no problem, but there is a 750GB/day/user upload limit. That limit is only applied to new uploads once it has been exceeded. I run the syncs on a staggered schedule where accounts 1 and 3 upload on Mon, Wed, Fri, Sun, etc. and accounts 2 and 4 upload on Tue, Thu, Sat, Mon, etc.; a scheduling sketch follows.
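The staggering is just "alternate days, two accounts at a time". Here's one way it could be wired up with cron and day-of-year parity; the script name, paths, and remotes are the same placeholders as above, and the parity trick is only my illustration, not the only way to do it:

  #!/bin/bash
  # offsite-sync.sh: accounts 1+3 on even days of the year, 2+4 on odd days.
  # 10# forces base 10 so zero-padded days like "089" aren't parsed as octal.
  day=$(date +%j)
  if (( 10#$day % 2 == 0 )); then
      rclone sync /backups/plex      gdrive1:veeam/plex      --transfers 4
      rclone sync /backups/other-vms gdrive3:veeam/other-vms --transfers 4
  else
      rclone sync /backups/fileserver gdrive2:veeam/fileserver --transfers 4
      rclone sync /backups/agents     gdrive4:veeam/agents     --transfers 4
  fi
  # crontab entry to kick it off nightly at 01:00:
  # 0 1 * * * /usr/local/bin/offsite-sync.sh

(Newer rclone builds also have a --drive-stop-on-upload-limit flag to stop cleanly when the 750GB quota is hit, if your version supports it.)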

Whenever a synthetic full occurs I have to re-upload a complete copy.

The 750GB/day/user limit is never exceeded by incremental data; incrementals upload first and complete well before a new synthetic full (if it's the weekend) gets anywhere near the 750GB mark. Each job uploads 4 files at a time, and no single file upload has ever surpassed ~25MB/sec (~250Mbit/sec on the wire, which appears to be a Google Drive limitation). Oddly, four file uploads running at once never total more than 75MB/sec (750Mbit/sec), so I still have 250Mbit/sec left for other things on my network and don't notice any impact if I happen to be awake during an upload.

My unlimited-bandwidth internet plan and the G Suite Business accounts are significantly cheaper than the equivalent S3-compatible storage I'd have to pay for if I were to do this properly, which is why I chose this route. Even Backblaze B2, if Veeam supported it, wouldn't be cheaper than this setup; it would just save me bandwidth, since I assume Veeam with supported S3 storage doesn't re-upload synthetic fulls when they are created.

Let me know if I missed anything you were looking for.

Re: Repository level encryption

Post by HannesK »

"Whenever a synthetic full occurs I have to re-upload a complete copy."
That's what I was looking for.

I'm not that good with prices. What I see on the forums is that Wasabi seems to be the cheapest for S3 (similar in price to Backblaze, which we do not support because it is not S3-compatible).
"I'm lucky enough to have 1 Gbit fibre internet and no data cap."
Nice for a homelab :-)

Re: Repository level encryption

Post by schewee »

Yeah, I checked Wasabi. I'm in Canada, so their USD $95/month for 16TB (about CAD $123.93, plus 2-4% foreign-exchange fees) is roughly double the cost of unlimited G Suite.
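(For reference: assuming G Suite Business at what was then roughly USD $10 per user per month, my 5 accounts work out to about USD $50/month, which is where "roughly double" comes from.)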