Comprehensive data protection for all workloads
DaFresh
Enthusiast
Posts: 64
Liked: 1 time
Joined: Aug 30, 2011 9:31 pm
Full Name: Cedric Lemarchand
Contact:

NTFS block size for 10TB+ repository, 4k vs 64k

Post by DaFresh »

Hello,

I am about to move a local Veeam backup repository (currently on the Veeam server, which also runs vCenter) to a dedicated VM in order to spread the workload.
I am wondering whether it would be better to use 64k instead of the default 4k cluster size for the NTFS partition. I personally see only advantages:

- Veeam backup produces only huge files, which is exactly what 64k clusters are intended for
- better read performance: more data per read operation than with a 4k cluster size (that doesn't mean 16x the speed, but it is less work for the storage layer)
- better write performance: more data per write operation than with a 4k cluster size (same reasoning as above)
- savings in storage use: 16x less allocation metadata (maybe negligible, maybe not, given the size of the pool; see the sketch below)

Any other pros or cons? Advice will be much appreciated.
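To put rough numbers on the metadata and slack trade-off above, here is a minimal back-of-the-envelope sketch. It assumes NTFS tracks allocation with one bit per cluster in its $Bitmap metafile; the 10 TB volume size and the file count are illustrative only.

```python
# Back-of-the-envelope comparison of 4k vs 64k NTFS clusters on a 10 TB volume.
# Assumes one bit of allocation bitmap per cluster ($Bitmap metafile) and an
# average of half a cluster of slack per file; the file count is illustrative only.

TB = 1024**4

def cluster_overhead(volume_bytes, cluster_bytes, n_files):
    clusters = volume_bytes // cluster_bytes
    bitmap_bytes = clusters // 8                  # 1 bit per cluster
    slack_bytes = n_files * cluster_bytes // 2    # ~half a cluster wasted per file
    return clusters, bitmap_bytes, slack_bytes

for cluster in (4 * 1024, 64 * 1024):
    clusters, bitmap, slack = cluster_overhead(10 * TB, cluster, n_files=200)
    print(f"{cluster // 1024:>2}k clusters: {clusters:>13,} clusters, "
          f"allocation bitmap ~{bitmap / 1024**2:,.0f} MB, slack ~{slack / 1024**2:,.1f} MB")
```

With only a couple of hundred large backup files on the volume, the slack is irrelevant either way, while the allocation bitmap shrinks 16x.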

Thx,

Cédric
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by dellock6 »

I would also add as an advantage the ability to go beyond 16 TB for a single partition by using a larger cluster size. Since you are already at 10 TB, a larger cluster size will save you from having to rebuild the partition at some point if you plan to expand it in the future.
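For reference, the 16 TB figure follows directly from the Windows NTFS implementation supporting at most 2^32 - 1 clusters per volume, so the maximum volume size scales with the cluster size. A quick sketch of the arithmetic:

```python
# NTFS as implemented in Windows addresses at most 2**32 - 1 clusters per volume,
# so the maximum volume size grows linearly with the cluster size.

MAX_CLUSTERS = 2**32 - 1

for cluster_kb in (4, 8, 16, 32, 64):
    max_tb = MAX_CLUSTERS * cluster_kb * 1024 / 1024**4
    print(f"{cluster_kb:>2}k clusters -> max volume ~{max_tb:,.0f} TB")
```

So a 4k-cluster volume tops out just under 16 TB, while 64k clusters push the limit to roughly 256 TB.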

Honestly, the only con is the inability to use features like encryption if you do not use the default cluster size, but I'm not sure that is a compelling problem for a partition used as a Veeam repository. The loss of free space when saving a few small files is negligible. Just be sure to also align the block size of the underlying storage for even better performance.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
DaFresh
Enthusiast
Posts: 64
Liked: 1 time
Joined: Aug 30, 2011 9:31 pm
Full Name: Cedric Lemarchand
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by DaFresh »

Hi Luca,

Nice catch about the 16TB limit, you are right, 16TB is not so far off.
The underlying storage is ZFS (over NFS), which has a 128k record size by default, so it should fit well.

No cons, especially from the Veeam team?

A follow-up question: do I need to use the backup option "Local Target (16TB+ backup files)", and why? Could I set this option on a new backup job mapped to the old one, which originally did not have this option set?

Thx,

Cédric
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by dellock6 »

The dedup block size to set in a Veeam job depends on the expected size of the backup, but if you are using the latest 7.0 release and a 64-bit Windows machine to run the Veeam proxy, the memory management capabilities of the 64-bit kernel make this decision less important: Veeam will be able to run at the maximum dedup level even on large files. The cons are that the backup completion time will be longer, and the memory required to store all the hashes of the deduped blocks will be higher (I do not have numbers...).
Be careful: you will have to run a new full backup if you change the deduplication level.
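To make the memory side of that trade-off concrete, here is a rough, hypothetical estimate. The per-block overhead and the candidate block sizes below are assumptions for illustration only, not figures published by Veeam.

```python
# Rough, hypothetical estimate of the dedup hash metadata for one backup file.
# bytes_per_entry and the candidate block sizes are assumptions, not Veeam internals.

def dedup_memory_mb(backup_size_tb, block_size_kb, bytes_per_entry=32):
    """Approximate hash-table size in MB for a backup of the given size."""
    n_blocks = (backup_size_tb * 1024**4) // (block_size_kb * 1024)
    return n_blocks * bytes_per_entry / 1024**2

for block_kb in (256, 512, 1024, 4096):     # smaller blocks = better dedup, more metadata
    print(f"{block_kb:>5} KB blocks -> ~{dedup_memory_mb(8, block_kb):,.0f} MB of hash metadata")
```

Smaller blocks mean many more hashes to track for the same 8 TB VBK, which is where both the extra memory and the longer completion time come from.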

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by yizhar »

DaFresh wrote: The underlying storage is ZFS (over NFS), which has a 128k record size by default, so it should fit well.
Hi.
I think there might be needless encapsulation in your setup.

If the ZFS storage is hosted on a Linux machine, why not set it up as a Veeam Linux repository?
This can give you better performance and stability by reducing unneeded layers (NTFS over iSCSI/CIFS over ZFS, instead of writing directly to ZFS).

What kind of storage device is it?

Yizhar
ccatlett1984
Enthusiast
Posts: 83
Liked: 9 times
Joined: Oct 31, 2013 5:11 pm
Full Name: Chris Catlett
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by ccatlett1984 » 1 person likes this post

DaFresh wrote: A follow-up question: do I need to use the backup option "Local Target (16TB+ backup files)", and why? Could I set this option on a new backup job mapped to the old one, which originally did not have this option set?
To answer your second question: no, you don't need to use that setting. It is meant for when a single backup job will produce a backup file that crosses the 16TB size limit, not for the size of your repository.
DaFresh
Enthusiast
Posts: 64
Liked: 1 time
Joined: Aug 30, 2011 9:31 pm
Full Name: Cedric Lemarchand
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by DaFresh »

yizhar wrote: If the ZFS storage is hosted on a Linux machine, why not set it up as a Veeam Linux repository?
This can give you better performance and stability by reducing unneeded layers (NTFS over iSCSI/CIFS over ZFS, instead of writing directly to ZFS).

Yizhar
Hello Yizhar,
There are several reasons for that:

- configuration consistency: I like to keep things separated: the client network is for VMs, the storage network is for *storage* (e.g. the iSCSI initiator in my case). If I wanted to use this NAS as a Linux repository for Veeam, I would need to add network connectivity to both the Veeam server and the proxy, and I don't feel comfortable with that.
- storage layer: I have more confidence in NFS to handle the workload than in SSH/Perl (IMHO), though I do use a Linux repository for other things like offsite backups and it works pretty well.
- this datastore is used for other purposes too, so using a Linux repository wouldn't remove the datastore from the picture ;-)
DaFresh
Enthusiast
Posts: 64
Liked: 1 time
Joined: Aug 30, 2011 9:31 pm
Full Name: Cedric Lemarchand
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by DaFresh »

ccatlett1984 wrote: To answer your second question: no, you don't need to use that setting. It is meant for when a single backup job will produce a backup file that crosses the 16TB size limit, not for the size of your repository.
Do you mean the full size of the job, or only one file in the repository pool?
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by dellock6 »

To put it simply: the size of your VBK file.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
DaFresh
Enthusiast
Posts: 64
Liked: 1 time
Joined: Aug 30, 2011 9:31 pm
Full Name: Cedric Lemarchand
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by DaFresh »

OK, thanks for the clarification; mine is approximately 8TB now.
I understand that changing this option only takes effect once a new full backup is done, which in my case means adding roughly 8TB to the total space used by the job ... not really nice, but the sooner it is done the less space it will take, and this overhead will disappear after a full retention policy cycle, right?
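As a purely illustrative sketch of that transition (the ~8 TB full is the figure from this thread; the incremental size and the retention count are made-up assumptions), the old chain coexists with the new one until retention allows it to be deleted, so peak usage is roughly the sum of both chains:

```python
# Illustrative only: extra repository space while the old chain ages out after
# switching the block size. The incremental size and retention count are
# made-up assumptions; only the ~8 TB full comes from this thread.

full_tb = 8.0           # size of one full backup (VBK)
incr_tb = 0.4           # assumed size of a daily incremental (VIB)
retention_points = 14   # assumed retention in restore points

chain_tb = full_tb + retention_points * incr_tb
peak_tb = 2 * chain_tb  # old chain + new chain, until the old one can be deleted
print(f"steady state ~{chain_tb:.1f} TB, peak during the transition ~{peak_tb:.1f} TB")
```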
dellock6
VeeaMVP
Posts: 6166
Liked: 1971 times
Joined: Jul 26, 2009 3:39 pm
Full Name: Luca Dell'Oca
Location: Varese, Italy
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by dellock6 »

Exactly: you change the dedup block size and create a new full, and the old chain stays there until its retention expires.

Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software

@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
yizhar
Service Provider
Posts: 182
Liked: 48 times
Joined: Sep 03, 2012 5:28 am
Full Name: Yizhar Hurwitz
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by yizhar »

Hi.

This is yet another advantage of dividing backups into several jobs, having for example 4 x 2TB VBK files instead of 1 x 8TB.
You can more easily create a full backup for one VBK at a time.

Yizhar
tsightler
VP, Product Management
Posts: 6035
Liked: 2860 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: NTFS block size for 10TB+ repository, 4k vs 64k

Post by tsightler » 2 people like this post

DaFresh wrote: storage layer: I have more confidence in NFS to handle the workload than in SSH/Perl (IMHO), though I do use a Linux repository for other things like offsite backups and it works pretty well.
Just wanted to chime in and correct what appears to be a misconception here. Veeam uses SSH+Perl to communicate with the Linux repository and start a VeeamAgent process there. SSH+Perl is not actually used for the data transfer at all; the transfer is handled by the started VeeamAgent process, which is effectively the exact equivalent of the VeeamAgent process on Windows. The only real difference between the two is that on Windows we have a "Transport" service which runs all the time and has a control channel the Veeam server can use to tell it to start VeeamAgents, while on Linux the VeeamAgent is non-persistent and is installed/removed via SSH and a small Perl wrapper script at each run. This technology is well proven, as it was used by Veeam all the way back in v1, when we performed backups via the Linux-based ESX service console.

So indeed, in both cases you will be using a VeeamAgent; you're only changing where it runs. Running the VeeamAgent directly on the Linux box will almost certainly provide the best performance, as we send the data stream from the proxy directly to the VeeamAgent on the Linux server, which then writes to the local filesystem, with no other protocols involved. While NFS is perhaps not as "chatty" as SMB, it's still nowhere near as efficient as the Veeam data stream between agents.