We are in the process of upgrading all our customers from v7 to v8, and we recently finished one where we have about 240 TB of data divided over 4 object storage repositories, and I wanted to share the upgrade times.
We took them one by one, and it seems the number of objects is what determines the upgrade time:
Bucket 1 with 63 million objects, 85 TB of data, 16 hours to upgrade: 5.3 TB per hour, or approx. 4 million objects per hour.
Bucket 2 with 8.4 million objects, 35 TB of data, 2 hours 15 minutes to upgrade: 15.5 TB per hour, or approx. 3.7 million objects per hour.
Bucket 3 with 13.2 million objects, 56 TB of data, 3.5 hours to upgrade: 16 TB per hour, or approx. 3.8 million objects per hour.
Bucket 4 with 12.1 million objects, 60 TB of data, 5 hours to upgrade: 12 TB per hour, or approx. 2.4 million objects per hour.
Not sure why the last bucket was slower than the others at processing objects.
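For anyone who wants to reproduce the throughput figures above or roughly extrapolate to their own buckets, here is a minimal Python sketch based only on the numbers in this post; the bucket list and the 20-million-object example at the end are purely illustrative, not measured data.

```python
# Rough throughput calculator based on the upgrade times reported above.
# The hypothetical fifth bucket at the end is illustrative only.

buckets = [
    # (name, objects in millions, data in TB, upgrade duration in hours)
    ("Bucket 1", 63.0, 85, 16.0),
    ("Bucket 2", 8.4, 35, 2.25),
    ("Bucket 3", 13.2, 56, 3.5),
    ("Bucket 4", 12.1, 60, 5.0),
]

for name, m_objects, tb, hours in buckets:
    print(f"{name}: {tb / hours:.1f} TB/hour, "
          f"{m_objects / hours:.1f} million objects/hour")

# Extrapolate an upgrade time for another bucket from the average
# objects-per-hour rate observed above (a very rough estimate).
avg_obj_rate = sum(m / h for _, m, _, h in buckets) / len(buckets)
new_bucket_m_objects = 20.0  # hypothetical bucket with 20 million objects
print(f"Estimated upgrade time: {new_bucket_m_objects / avg_obj_rate:.1f} hours")
```

As the numbers show, the objects-per-hour rate is fairly consistent across the first three buckets, which is why extrapolating on object count rather than raw TB seems the safer bet here.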
Thanks for sharing — this is valuable not only to other users but also to us at Veeam.
We are now completing another set of upgrade tests, and I hope to make the updated numbers publicly available soon to help with upgrade planning. So far, we know that for object storage repositories, the repository cache size is what matters. The larger the cache, the longer the upgrade will take. The backup data itself is not touched during the upgrade, so technically it does not really matter how many sites, mailboxes and other objects there are in a repository.
If you share your cache sizes for each of the aforementioned repositories, I’d be very interested to see them. Am I correct that the times you shared do not include repository indexing? Are you using a separate PostgreSQL instance?
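If it helps with pulling those cache sizes together, here is a minimal, generic Python sketch that simply totals up a folder's size on disk. The path below is a hypothetical placeholder, not an official cache location; point it at wherever your object storage repository keeps its local cache.

```python
# Generic folder-size tally; the path is a placeholder, not the actual
# VB365 cache location - substitute your repository's local cache folder.
from pathlib import Path


def folder_size_gb(path: str) -> float:
    """Return the total size of all files under `path`, in GB."""
    total_bytes = sum(f.stat().st_size for f in Path(path).rglob("*") if f.is_file())
    return total_bytes / 1024**3


cache_path = r"D:\VB365Cache\Repository1"  # hypothetical path
print(f"{folder_size_gb(cache_path):.1f} GB")
```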
What storage is behind the S3 buckets?
I'm right before upgrading my v7 to v8... I have 2x 230 TB buckets (each with 30 GB of cache), a 75 TB bucket (15 GB cache), a 30 TB bucket, and many smaller ones around 10 TB...
First, I suggest you hold off on upgrading until we deliver the next product update.
Next, to estimate upgrade times, you should look at the repository cache size vs the amount of data in a repository - backups are not touched during the upgrade, while a cache is updated and transferred to PostgreSQL.
In the latest tests, we saw that upgrades of S3 Compatible repositories can be much slower than Azure or AWS.
Polina wrote: ↑Mar 11, 2025 10:53 am
Next, to estimate upgrade times, you should look at the repository cache size vs the amount of data in a repository - backups are not touched during the upgrade, while a cache is updated and transferred to PostgreSQL.
In the latest tests, we saw that upgrades of S3 Compatible repositories can be much slower than Azure or AWS.
How can the difference between S3 Compatible and Azure/AWS be relevant if these buckets are not touched during the update? Or by "not touched" do you mean they are only read, not written? Even then, I would assume a local LAN should be faster than something on the internet with some latency attached to it...
Is it possible to test/dry-run such a metadata/cache migration? I.e., by cloning the VB365 server and upgrading the clone? I guess it needs to connect to the S3 buckets? Or would that maybe even work without a connection to S3?