Dear all,
I have a sizing question. We currently use Windows ReFS on our proxies, and we would like to estimate how much object storage we would require
based on the following storage optimisations:
Compression set to: Optimal
Deduplication set to: Local target (which is ReFS on Windows)
I know ReFS does NOT support compression, but it does support deduplication.
So I need a rough estimate, based on my data, of how many TB on my local target (ReFS) would translate into how many TB on S3 object storage once I move to that type of storage.
I'm sure the required capacity in TB is going to explode; the question is by how much?
To my understanding, S3 object storage is not suited for deduplication, but it does offer compression (so I've seen on Cloudian).
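As a back-of-envelope sketch of the extrapolation being asked for (the ratios below are illustrative assumptions, not measured values from any real environment):

```python
def estimate_object_storage_tb(local_used_tb: float,
                               dedup_ratio: float,
                               s3_compression_ratio: float) -> float:
    """Rough estimate of S3 capacity needed for data currently held on a
    deduplicating local target.

    local_used_tb        -- TB actually consumed on the local target
    dedup_ratio          -- rehydrated size / deduplicated size (>= 1.0)
    s3_compression_ratio -- rehydrated size / compressed size on S3 (>= 1.0)
    """
    rehydrated_tb = local_used_tb * dedup_ratio   # undo local dedup savings
    return rehydrated_tb / s3_compression_ratio   # apply S3-side compression


# Hypothetical example: 23.4 TB used locally, 2.2x dedup savings,
# 1.5x compression on the object storage side.
print(round(estimate_object_storage_tb(23.4, 2.2, 1.5), 1))  # -> 34.3
```

Both ratios have to come from your own statistics; the point of the sketch is only that the local footprint must first be rehydrated before any object-storage-side savings are applied.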
Q1)
I'm puzzled by the Capacity, Free and Usage columns for the scale-out backup repositories. Usage is not the difference between Capacity and Free.
To give an example for just one proxy target, I have:
Cap      Free     Usage
58.2TB   34.8TB   51.8TB
What is the meaning of Usage? Is it the rehydrated space without deduplication (as that is the only feature ReFS supports)?
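Working through the figures above under the rehydrated-space interpretation (an assumption at this point in the thread):

```python
# Figures from the table above for a single proxy target.
capacity_tb = 58.2
free_tb = 34.8
usage_tb = 51.8                              # reported "Usage" (assumed rehydrated size)

actually_used_tb = capacity_tb - free_tb     # space actually consumed on disk
savings_ratio = usage_tb / actually_used_tb  # implied dedup / block-clone savings

print(round(actually_used_tb, 1))  # -> 23.4
print(round(savings_ratio, 2))     # -> 2.21
```

So under that reading, the repository physically holds about 23.4 TB that would rehydrate to 51.8 TB, roughly a 2.2x savings factor.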
Q2)
Does anyone have a PowerShell Get- cmdlet to actually export this data from the scale-out repositories? I didn't come across one in v11a,
but it might be available in the 'wild' somewhere.
-
- Novice
- Posts: 8
- Liked: never
- Joined: Oct 05, 2022 7:37 am
- Full Name: Stefan Timmermans
- Contact:
-
- Product Manager
- Posts: 14818
- Liked: 3074 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
- Contact:
Re: Extrapolation
Hello,
Space usage on ReFS vs. object storage is about the same as long as you keep the "Local target" (1MB) block size setting.
Q1: Yes, it's the rehydrated space. We cannot calculate the real usage on ReFS / XFS.
Q2: V12 can export to "other repositories"
Best regards,
Hannes
Re: Extrapolation
Hannes,
I understand that the filesystem is not taken into account in the Veeam figures; so far, understandable.
What my question actually was:
A) We use compression and deduplication at the Veeam job level, so is "Usage" Veeam's size AFTER compression and deduplication,
or is it
B) the amount of data processed from the source (VMFS/vSphere infrastructure), and thus BEFORE Veeam's compression/deduplication?
I presume it's A.
Re: Extrapolation
Hello,
Yes, A, but spaceless / fast clone fulls are counted as full backups.
Best regards,
Hannes