Alignment => not needed, as Nimble does variable block size deduplication.
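Why alignment doesn't matter with variable-block dedup is easier to see with a toy sketch. This is just an illustration, not Nimble's actual algorithm: the boundary rule below is a simplified stand-in for the rolling-hash content-defined chunking real dedup engines use. With fixed-size blocks, a single inserted byte shifts every block and kills all matches; with content-defined boundaries, the chunker realigns right after the insertion.

```python
def fixed_chunks(data, size=8):
    """Naive fixed-size chunking: split every `size` bytes."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data, mask=0x07):
    """Toy content-defined chunking: cut after any byte whose low bits
    match the mask. Real systems use a rolling hash, but the effect is
    the same: boundaries depend on content, not on byte offsets."""
    chunks, start = [], 0
    for i, b in enumerate(data):
        if b & mask == mask:          # content-defined boundary
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_ratio(old, new, chunker):
    """Fraction of the new file's chunks already stored for the old file."""
    seen, fresh = set(chunker(old)), set(chunker(new))
    return len(fresh & seen) / len(fresh)

base = bytes(range(200)) * 3          # a repetitive "backup file"
shifted = b"\x00" + base              # one inserted byte shifts everything

# Fixed chunking: almost nothing deduplicates after the shift.
# Variable chunking: only the chunk containing the insertion is new.
print(dedup_ratio(base, shifted, fixed_chunks))     # near 0.0
print(dedup_ratio(base, shifted, variable_chunks))  # near 1.0
```

Because the dedup engine finds its own chunk boundaries in the data stream, there is nothing to gain from aligning Veeam's block size to the storage.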
Decompress => In most cases this will allow Nimble's deduplication engine to reduce the data further, but because the data gets written uncompressed, the Nimble system has to process roughly twice as much data. Test which works better for your environment and decide... If the storage is not the bottleneck, you can enable this setting and store the data uncompressed (better overall dedup on Nimble).
Per VM => Yes, enable this with all kinds of dedup devices, as it allows you to increase the stream count to the storage and results in better overall performance.
Inline dedup => It helps to process less data during backup. The real difference comes at restore: when this setting is disabled, we read the metadata only once at restore and keep it in RAM instead of accessing it more randomly. So disabling this setting can speed up restores from deduplication devices.
Compression => Should be enabled in all cases. If you need uncompressed data at the target, enable the "uncompress" feature on the repository.
Block size => Leave it at Local (1MB)... Local 16TB (4MB) will potentially increase backup speed but reduce restore performance... You can run some backup/restore tests to find out which setting works best for you... My recommendation is to use the default, "Local".
All deduplication storage has a penalty on random reads. To optimize the interaction with dedup devices during synthetic processing (merges/synthetic fulls), it can be helpful to use block pointer technologies like DDBoost/Catalyst/ReFS... In your setup, you can use a Windows 2016 Server as repository and format the backup target storage (Nimble dedup) with ReFS... This will allow you to use the fast merge feature available for ReFS. https://helpcenter.veeam.com/docs/backu ... tml?ver=95