Comprehensive data protection for all workloads
jcofin13
Service Provider
Posts: 200
Liked: 22 times
Joined: Feb 01, 2016 10:09 pm

trying to make sense of storage usage

Post by jcofin13 »

I have a VBR server and VMware 7.

This might be a stupid question, but how do I know how much storage I am using?

If I look at my Linux repo directly, I see I have 2 drives:
130 TB
120 TB
for a total of 250 TB in my SOBR.

Now if I go to each of my jobs' properties and look at the virtual machine size it calculates, it shows 135 TB.
If I right-click the job objects under Backups --> Disk, it shows me 2 different sizes:
1. A "Total size" of 155 TB
2. A "Backup size" of 220 TB

I'm assuming the Total size is the size of the VMs in the backup plus all of the restore points after dedup and compression, and the Backup size is maybe the size of all the VMs and all the restore points without it?
I don't understand how I can have 250 TB of storage yet a simple df -h on my disks shows 104 TB of free space across all my SOBR extents. That would mean my backups take up roughly 150 TB, which is much closer to the Total size than to the Backup size. Is that accurate? The Backup size adds up to over 220 TB across all my jobs, so I would expect to see less than 30 TB free on my repo, but as I said, it shows around 100 TB.

All I'm trying to sort out is how big all my backup data is so I can budget for new hardware to replace it. I would think df -h would be the answer, but I see all these other numbers and it's confusing. The commands I am comparing are below.
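For reference, these are the commands I am comparing (a rough sketch; /mnt/extent1 and /mnt/extent2 are placeholder paths standing in for my actual SOBR extent mount points):

  df -h /mnt/extent1 /mnt/extent2    # what the filesystem reports as used/free on disk
  du -sh /mnt/extent1 /mnt/extent2   # sums per-file allocation, so blocks shared between files get counted more than once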
Mildur
Product Manager
Posts: 10642
Liked: 2867 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland

Re: trying to make sense of storage usage

Post by Mildur »

Hello

Sounds like the typical FastClone situation.
On ReFS or XFS, synthetic full backups do not consume their full logical size. Identical blocks shared between different full backup files and incremental files are stored only once on the physical disk. For XFS the technology is called reflink; it's part of the XFS filesystem, and we are able to use it for your backups. A small illustration follows below.
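As a minimal illustration of the reflink mechanism on a test XFS volume (the mount point /mnt/repo is a placeholder; this is the filesystem behavior, not how Veeam writes its files):

  dd if=/dev/zero of=/mnt/repo/full1.dat bs=1M count=1024       # create a 1 GiB test file
  cp --reflink=always /mnt/repo/full1.dat /mnt/repo/full2.dat   # new file, but the data blocks are shared
  du -sh /mnt/repo   # ~2 GiB: du counts the shared blocks once per file
  df -h /mnt/repo    # free space dropped by only ~1 GiB, because the blocks exist once on disk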

The backup console displays the cumulative logical size of all backup files as reported by the filesystem. The backup server has no insight into the physical space actually allocated to each backup file.

You can read more about it in this KB: https://www.veeam.com/kb2996

df -h is the most accurate option, because it shows you the physical disk space currently used with FastClone. Our Move Backup option moves backups FastClone-aware (if your new storage supports FastClone as well) and will require approximately the same physical storage on the target as your old repository.
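If you want to double-check that a new XFS volume was formatted with reflink support before moving backups to it, you can inspect it like this (the mount point is a placeholder):

  xfs_info /mnt/newrepo | grep -o 'reflink=[01]'   # reflink=1 means the volume supports FastClone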

To size your new storage, I also recommend using our calculator:
- https://www.veeam.com/calculators/simple/vbr/machines

Best,
Fabian
Product Management Analyst @ Veeam Software
jcofin13
Service Provider
Posts: 200
Liked: 22 times
Joined: Feb 01, 2016 10:09 pm

Re: trying to make sense of storage usage

Post by jcofin13 »

Could be, although my main storage doesn't show more space used than is actually available.

I was mainly looking at Backups --> Disk --> right-click each job --> Properties, comparing Objects --> Total size with Files --> Backup size. In almost every instance the Backup size is larger than the Total size, but not in every one.

I realize that df -h is the most accurate option, and thank you for clarifying that. I'm a bit lost on all these other backup size numbers, as they don't match the df -h output, not even close. I guess I will focus on my df -h numbers when planning for future storage and take that into consideration when using the calculator, totaling across the extents as shown below.
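For planning I will probably just total the extents in one go (a sketch; the mount points are placeholders for my actual extents):

  df -h --total /mnt/extent1 /mnt/extent2   # the 'total' row sums size, used, and free space across all extents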

To confuse things even more, we have backup items that we don't really need to offload via a SOBR and would like to keep local, and the only way to do that is to have another repo that is not part of the offloading SOBR. These are test/dev VMs that are less critical; they are still important, but we would like to save the cost of storing them on a capacity tier and getting the bill for that each month when it is not needed.

The hard part is figuring out how much storage those jobs will need so we can get correctly sized storage for those backups as well. In the end we will need the same total storage we have now, but we would like to split it out so that only the most important items are copied to the capacity tier and all other jobs stay local to save cost, as they are not as critical. A rough per-job check is sketched below.
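Since each job writes into its own folder on the repository, a rough way to gauge the footprint per job is something like this (a sketch; the mount points are placeholders, and because du counts FastClone-shared blocks once per file, each folder's number is an upper bound on its physical usage):

  du -sh /mnt/extent1/*/ /mnt/extent2/*/   # per-job-folder size on each extent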