olafurh
Service Provider
Posts: 25
Liked: 16 times
Joined: Oct 29, 2014 9:41 am
Full Name: Olafur Helgi Haraldsson
Location: Iceland
Contact:

Largest volumes - are we just scared?

Post by olafurh »

Hi,

I have been using Veeam for a few years now, and I would like to know what other people are doing with repositories.

One question I get a lot from my customers and colleagues is: what is the maximum size of a volume you would go with?

Back in the day, I would not recommend anything over 20-64 TB on NTFS (formatted with /L), mainly because of the pain of moving that much data around. But with more mature filesystems, the performance limitations in ReFS largely overcome, and 25-100 Gbit/s Ethernet now common, I see an opportunity to grow volumes much bigger (512 TB-1 PB) to get as much performance and space savings (block clone) out of ReFS as possible.
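To put rough numbers on the block clone savings, here is a minimal sketch; the backup size, change rate and retention are made-up assumptions for illustration, not anyone's real workload:

Code: Select all

# Rough model of ReFS block clone (fast clone) savings for synthetic fulls.
# All figures are assumed example numbers: a 100 TB full backup, 5% daily
# change rate, 6 incrementals per week, 4 weekly synthetic fulls retained.
full_tb = 100
change_rate = 0.05
incrementals_per_week = 6
weekly_fulls_kept = 4

incr_tb = full_tb * change_rate * incrementals_per_week * weekly_fulls_kept

# Without block clone, every synthetic full is physically rewritten:
without_clone = full_tb * weekly_fulls_kept + incr_tb

# With block clone, a synthetic full mostly references existing blocks,
# so only one full's worth of unique data plus the increments is stored:
with_clone = full_tb + incr_tb

print(f"without block clone: ~{without_clone:.0f} TB on disk")
print(f"with block clone:    ~{with_clone:.0f} TB on disk")

The point being: each extra synthetic full costs a whole full copy without cloning, but only the changed blocks with it.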

What are your thoughts on the 1 GB of RAM per 1 TB of storage ratio for repositories? RAM is cheap today, and it's not an issue to put 1 TB of RAM in a repository server running 1 PB of storage, but is it really needed?
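For reference, the arithmetic behind that rule of thumb is trivial; a quick sketch (the 1 GB per 1 TB figure is the commonly quoted guideline, not a hard requirement):

Code: Select all

# The commonly quoted repository rule of thumb: 1 GB of RAM per 1 TB of storage.
# Quick look at what that implies at various volume sizes (illustrative only).
RAM_GB_PER_TB = 1

for storage_tb in (64, 256, 512, 1024):  # 1024 TB = 1 PB
    ram_gb = storage_tb * RAM_GB_PER_TB
    print(f"{storage_tb:>5} TB storage -> {ram_gb:>5} GB RAM ({ram_gb / 1024:.2f} TB RAM)")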

Why should we split a multi-TB/PB RAID60 array into multiple storage volumes just to work around an "old" filesystem limitation?

olafurh
Mike Resseler
Product Manager
Posts: 8045
Liked: 1263 times
Joined: Feb 08, 2013 3:08 pm
Full Name: Mike Resseler
Location: Belgium
Contact:

Re: Largest volumes - are we just scared?

Post by Mike Resseler »

Olafur,

The 64 TB limit has always been down to the VSS limitations for a volume: if you wanted to use data deduplication on the volume, or needed to take a snapshot of it, you couldn't go above 64 TB. That said, my personal feeling is still to keep volumes smaller and just use multiple volumes. It might be my "old-school" way of thinking, and my experience with repairing volumes and such, that makes me dislike larger volumes :-). On the other hand, in conversations with prospects, partners, customers and so on, I do indeed see people starting to use larger and larger volumes, precisely because of ReFS (and its Linux alternatives). Many of the issues I used to run into in the old days no longer seem to be an issue.
Just my 2 cents
dellock6
VeeaMVP
Posts: 6139
Liked: 1932 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: Largest volumes - are we just scared?

Post by dellock6 »

olafurh wrote: Why should we split a multi-TB/PB RAID60 array into multiple storage volumes just to work around an "old" filesystem limitation?
Because you don't want a huge single failure domain, which is, by the way, the location of your backup files. We all talk about the 3-2-1 rule, and I agree the proper protection for those volumes is a proper secondary copy, but the reality is that not every backup has a secondary copy (in the eyes of many, it's already a secondary copy itself), and even when it does, restoring a primary volume from the secondary may take time.

When discussing this with other providers, I usually see half a PB as the limit where I'd consider splitting repositories, probably even less. Not because the volume can't be bigger, but because it would then be "too" big for me. Obviously, other considerations come into play, like rack space, cooling and so on, since a 512 TB storage machine isn't much different from a 1 PB machine in terms of datacenter footprint, while it holds half the data.
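To make the "restoring from the secondary may take time" point concrete, a quick back-of-the-envelope sketch; the link speed, efficiency factor and volume sizes are illustrative assumptions:

Code: Select all

# Back-of-the-envelope: time to re-fill a repository volume from a secondary
# copy at a sustained network throughput. Sizes, speed and the 70% link
# efficiency are assumptions for illustration.
def refill_days(volume_tb: float, link_gbit: float, efficiency: float = 0.7) -> float:
    """Days to copy volume_tb over a link_gbit link at the given efficiency."""
    bytes_total = volume_tb * 1024**4            # TB -> bytes
    bytes_per_sec = link_gbit / 8 * 1e9 * efficiency
    return bytes_total / bytes_per_sec / 86400   # seconds -> days

for tb in (64, 512, 1024):
    print(f"{tb:>5} TB over 25 Gbit/s: ~{refill_days(tb, 25):.1f} days")

So a single 1 PB volume is not just twice the data of a 512 TB one, it is also twice the outage while you pull it all back.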

Happy to discuss this up there in a month by the way ;-)
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
nmdange
Veteran
Posts: 527
Liked: 142 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: Largest volumes - are we just scared?

Post by nmdange »

One thing I like to do is not let a volume span multiple JBOD enclosures, so that if an enclosure has a problem, the volume goes offline completely instead of having some disks online and some offline.
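A minimal sketch of why that alignment keeps failures clean; the volume and enclosure names are hypothetical:

Code: Select all

# Illustrates volume-to-JBOD-enclosure alignment. Names are hypothetical.
volumes = {
    "repo1": {"enclosure-A"},                 # aligned: lives in one enclosure
    "repo2": {"enclosure-B", "enclosure-C"},  # spans two enclosures
}

for failed in ("enclosure-A", "enclosure-B"):
    print(f"-- {failed} fails --")
    for name, enclosures in volumes.items():
        if failed not in enclosures:
            print(f"  {name}: unaffected")
        elif enclosures == {failed}:
            print(f"  {name}: fully offline (clean, unambiguous failure)")
        else:
            print(f"  {name}: partially degraded (some disks online, some offline)")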
