Thank you both, Fabian and Luke, for your replies!
I only read the installation guide and not the best practice guide, which of course is the first place I should have checked. My bad!
Mildur wrote:
Data migration between object storage repositories is not supported. This won't have an effect on the chosen design.
May I ask, what sort of data migration are you looking for on the same repository? What data do you expect to move in the future from one repository to another? Normally our customers ask us to deliver a feature to migrate data from one object storage provider to another, or from one bucket to a bucket in another account.
I am talking about the case where I change the proxy server of a backup job, which in turn would also change the repository used by that job. I see now that Move-VBOEntityData does not support moving data between object storage repositories, as you also point out. So, if we need to change the proxy/repository, the only option is to run a new full backup into the new object storage repository, correct? Are there any benefits in this scenario if the old and new repository use the same bucket?
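In case it helps anyone else reading this, here is roughly how I picture the retargeting step in PowerShell. This is only a sketch based on my reading of the cmdlet reference, not something I have tested yet: the job and repository names are placeholders, and I am assuming Set-VBOJob accepts a -Repository parameter the way Add-VBOJob does (please correct me if that is wrong).

```powershell
# VB365 PowerShell module on the backup server
Import-Module Veeam.Archiver.PowerShell
Connect-VBOServer

# Placeholder names - replace with the real job and the new object storage repository
$job     = Get-VBOJob -Name "Exchange - Batch 01"
$newRepo = Get-VBORepository -Name "MinIO-Repo-02"

# Point the job at the new repository; since no restore points exist there yet,
# the next run should effectively start a new full chain in that repository
Set-VBOJob -Job $job -Repository $newRepo

# Kick off the first run against the new repository
Start-VBOJob -Job $job
```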
JustBackupSomething wrote:
Issues will however arise when you don't have adequate hardware for the job. If you do a MinIO cluster deployment, then follow their best practices if you can (and yes, using anything other than SSDs is not following their BPs).
We will deploy a single-node single-drive setup, as that is the only hardware we have available. The compute resources will be more than what MinIO recommends, but the storage will unfortunately be HDDs. I say "single-drive", but the storage will be backed by an HDD SAN over 25 Gbps iSCSI, so hopefully it will be adequate. When we refresh our backup infrastructure in a couple of years, I will be able to design it with MinIO best practices in mind; this is just to get us up and running with object storage in the meantime.
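For completeness, this is roughly how I plan to start the single-node single-drive instance on the Windows host. The credentials, paths and ports below are placeholders rather than our real configuration:

```powershell
# Placeholder credentials - replace before use and keep them out of scripts in production
$env:MINIO_ROOT_USER     = "veeam-s3-admin"
$env:MINIO_ROOT_PASSWORD = "use-a-long-random-secret-here"

# Single-node single-drive: one data directory on the iSCSI-presented volume
# (D:\minio-data and C:\minio are placeholder paths for the HDD SAN volume and the binary)
& "C:\minio\minio.exe" server "D:\minio-data" --address ":9000" --console-address ":9001"
```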
JustBackupSomething wrote:
When restoring large sets of data you should (someone please correct if wrong on my understanding) get better performance if you split jobs into one bucket per job. This is due to the way Veeam reads / writes data from the S3 repository. You may not notice this at a small repository level but you should see improvements in search speed when splitting out the jobs.
Due to the size of our company (25 000 users), we have a large number of jobs in order to limit the object count for each job. We only do small restore jobs, never any large-scale restores. Creating one bucket per job would mean a lot more management overhead for the backup admin (i.e. me), so hopefully we can get away with one bucket per repository.
JustBackupSomething wrote:
To the S3 Object Storage Migration side,
1. Disable any changes to S3, delete it from M365 if you can, to prevent Veeam from running any retention jobs in the background.
2. Move the data.
3. Remap the jobs to the new location.
Theoretically this should result in a like-for-like migration, but mileage may vary. Make sure you do restore tests and validate that there are no inconsistencies in the data before putting it back into prod.
As mentioned above, when I said "migration" I meant moving backup objects between proxies and repositories. I have done a migration between JET-based repositories before, following steps similar to the ones you outline, with success. So it is good to hear that a similar approach should be possible with S3 storage.
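For the "move data" step specifically, my current thinking is to copy the bucket with the MinIO client, roughly as below. The aliases, endpoints, keys and bucket names are placeholders, and I still need to verify that Veeam is happy with a plain object-level copy before relying on it:

```powershell
# Register both MinIO endpoints with the MinIO client (aliases, URLs and keys are placeholders)
& mc.exe alias set old-minio "http://old-minio.example.local:9000" "OLD_ACCESS_KEY" "OLD_SECRET_KEY"
& mc.exe alias set new-minio "http://new-minio.example.local:9000" "NEW_ACCESS_KEY" "NEW_SECRET_KEY"

# Copy the Veeam bucket as-is, preserving object attributes
& mc.exe mirror --preserve old-minio/veeam-m365 new-minio/veeam-m365

# Sanity check: sizes/object counts should match on both sides before remapping the job
& mc.exe du old-minio/veeam-m365
& mc.exe du new-minio/veeam-m365
```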
Again, thank you both for your time and input. It is very valuable to me and a great help in making sure I get this right from the start.