Discussions related to exporting backups to tape and backing up directly to tape.
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

I'm new to tape backup in general. We're trying to figure out how to get data onto tape as fast as possible via Veeam. We have backups we'd like to put on tape for archive purposes (monthly backups). We tried the "files to tape" and "backup to tape" options, but the performance seems to be lacking; the best we've seen is ~100MB/s. Since the Veeam backups are already compressed, the theoretical LTO-8 write speed in this scenario would be the native 360MB/s, if I understand correctly. We would like to hear from the community on the best way to write to these tapes as fast as possible. Thanks in advance!

Veeam Server (v11a)
PowerEdge R440
Xeon Silver 4210R (10c/20t)
48GB RAM
Windows Server 2019

PROXY 1
VMware Virtual Machine
6 CPUs
32GB RAM
Windows Server 2008 R2

PROXY 2
PowerEdge R510
Xeon X5672 (4c/8t)
16GB RAM
Windows Server 2008 R2

PROXY 3 (also Tape Server)
PowerEdge R510
2x Xeon E5620 (4c/8t ea.)
ATTO ExpressSAS H1244 GT SAS
Windows 10 Pro for Workstations 21H2

REPO
Synology RS3617xs+
Xeon D-1531 (6c/12t)
8GB RAM
2 volumes in RAID6

TAPE DRIVE
Quantum SuperLoader 3 w/ IBM LTO-8 SAS drive

All items connected via 10Gbps Fiber
Tape drive connected to Tape Server via 12Gbps SAS
HannesK
Product Manager
Posts: 14968
Liked: 3159 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by HannesK »

Hello,
it sounds like the repository (Synology) is too slow. Which protocol do you use? If SMB is used, then iSCSI might be worth a try.

The two volumes in RAID6... how many disks are there per volume? And how fast can you copy a large file from a volume to one of the other machines? I would guess the speed will also be around 100 MByte/s.

File-to-tape tests are a good way to check the raw throughput. With backup-to-tape, I would make sure that the software is not creating virtual synthetic fulls (if you go with the default settings, it's unlikely that this is happening). Virtual synthetics can be a bit slower than normal backup-to-tape.

Best regards,
Hannes
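As a quick way to answer the copy-speed question above, one option is to pull a single large backup file from the Synology share with robocopy and read the throughput from its summary. This is only a rough sketch; the share path, destination folder, and file name are placeholders:

    # copy one large backup file from the share to a local disk; /J uses unbuffered I/O,
    # which suits very large files. Check the "Speed :" line in robocopy's summary.
    robocopy \\<synology-ip>\share D:\speedtest Backup_Job_2023-04-01.vbk /J

A single stream like this roughly mimics what one tape task reads, so if it already tops out near 100 MB/s, the bottleneck is likely in front of the tape drive.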
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

The backup repository is defined using SMB. There are 12 disks total, one storage pool, two volumes. Not really "disks" per volume. Not sure how iSCSI comes into play here. We used an LTFS utility to copy from the volume to the tape and we were able to get to 2Gbps.
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

I would recommend one RAID6 across the 12 drives you have. What is the detailed network configuration on your RackStation?
Which machine is your SMB gateway? Why Server 2008 and Windows 10? Are firmware and drivers up to date?
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

I recommend testing with diskspd on the tape server against the SMB repository. The test scenario should be the worst-case read scenario. See Veeam KB2014.
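For reference, a worst-case read test with diskspd against the SMB repository might look roughly like the following, run from the tape server. The UNC path, file size, and duration are placeholders to adapt; -w0 means 100% reads and -Sh disables software and hardware caching:

    # sequential 512 KB reads against a 25 GB test file on the share
    diskspd.exe -c25G -b512K -w0 -Sh -d120 -t1 \\<synology-ip>\share\diskspd-test.dat

    # harsher case: random 512 KB reads with a few outstanding I/Os (reuses the test file)
    diskspd.exe -b512K -w0 -r -Sh -d120 -t1 -o4 \\<synology-ip>\share\diskspd-test.dat

If the sequential numbers are already close to the ~100 MB/s seen in the tape jobs, the repository (or the path to it) is the limit rather than the tape drive.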
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

I cannot do anything about the RAID6; it was like that when I started. The network config is 3 of the 4 1Gbps connections in a bond at 3Gbps, plus two more (LAN 5 & 6) at 10Gbps each on what we call the "SAN" network. One of the proxies is the mount server for the repository (is this the SMB gateway you refer to?). Again, 2008 R2 is what was on those machines when I got here, and Windows 10 is on the tape server because it was cheaper than Windows Server, plus the SAS card supports Windows 10 with drivers direct from the manufacturer. Firmware/drivers are up to date.

For the diskspd, that was my next attempt.
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

Is the FQDN, hostname, or IP of the RackStation that is configured in the Veeam console on the SAN network or on the gigabit bond?
The gateway server is set at the point where you specify the shared folder and credentials.
Please keep in mind that data travels from the repository to the gateway and then to the data mover on the tape server (as far as I remember), so it may be worth changing the gateway server for this scenario.
Your RackStation is not your vSphere datastore storage, right?
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

The RackStation (backup repo) is set up in Veeam as follows:

Repo Name: RackRepo2 10G
Type: SMB
Host: PROXY 2 (see original post)
Path: \\SAN IP ADDRESS\share

Gateway Server (from settings): PROXY 2
Mount Server (from settings): PROXY 2 (vPower NFS enabled, though I don't know why)

It is not the vSphere datastore, only where we store backups.
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

I would change the SMB gateway to proxy 3 and follow Hannes' advice to avoid virtual synthetic fulls for the tape job. To do so, you have to time your full backup so that it finishes directly before the tape job starts.
Does anyone have other ideas?
HannesK
Product Manager
Posts: 14968
Liked: 3159 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by HannesK »

Hello,
iSCSI, because SMB is the worst protocol from a performance / reliability perspective :-)

This:
    "The network config is 3 of the 4 1Gbps connections in a bond at 3Gbps"
does not match this:
    "All items connected via 10Gbps Fiber"
But if there is a 1 Gbit/s network somewhere in the traffic flow, then the 100 MByte/s fits pretty well.

    "We used an LTFS utility to copy from the volume to the tape and we were able to get to 2Gbps"
That would be 2x the speed you see with Veeam. I don't know how they did it, but bonding usually does not help when only one source/destination IP pair is involved.

Best regards,
Hannes
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

@HannesK, the backup traffic is going over the 10Gbps interfaces, because "RackRepo2 10G" is not on the same network as the three bonded 1Gbps interfaces; the SMB share is addressed via the 10Gbps interface. The LTFS test result was also obtained over the 10Gbps interfaces, not the 1Gbps ones. We only mentioned the 1Gbps interfaces because @karsten123 asked about the network config on the Synology.

As far as iSCSI goes, we use it between our hosts and the SAN, but not in Veeam, and I'm not sure how that's achieved. If anyone wants to shed some light on that, perhaps I can set up a test environment and compare iSCSI vs SMB (we already know SMB is slower than <put favorite word here>).
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

You have to decide between Windows and Linux, then connect the iSCSI LUN and use a fast-clone filesystem (ReFS or XFS).
Make sure you set up proper MPIO.
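As a rough illustration of the Windows variant of that suggestion (on Windows Server 2012 R2 or later, not the existing 2008 R2 proxies), connecting the LUN and formatting it ReFS with 64 KB clusters could look like the PowerShell sketch below. The portal IP and volume label are placeholders, and MPIO is assumed to be installed and claimed for iSCSI separately:

    # register the Synology iSCSI portal and log in to the target (persistent, MPIO-enabled)
    New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true

    # bring the new LUN online and format it ReFS with 64 KB clusters
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel 'VeeamRepo'

On a Linux repository, the equivalent fast-clone choice would be XFS with reflink enabled.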
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

Still not clear, but I will look into it. Thanks for the info.
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

Can you be clearer about what your struggles are?
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

Our hosts use iSCSI to connect to the SAN, but I'm not sure how to set up Veeam to use iSCSI for backups and for tape; currently the backup repos are defined using SMB. If I understand correctly, using iSCSI for the backups and then for tape from the tape server would make this much faster. I'm still willing to bet the bottleneck is the repo we fetch backups from, but if changing our backup strategy fixes this, we'll redesign our solution from scratch and move forward.
karsten123
Service Provider
Posts: 509
Liked: 126 times
Joined: Apr 03, 2019 6:53 am
Full Name: Karsten Meja
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by karsten123 »

OK. To verify whether iSCSI could be faster, the following could be your plan (a rough command sketch follows this post):
- create an iSCSI LUN and target on your Synology NAS (best practice is thin provisioned on a Btrfs volume)
- connect this LUN to proxy 2 and format it with NTFS (best practice is ReFS with a 64K cluster size, but that is not an option on 2008 R2)
- add a direct attached repository, selecting proxy 2 as the server and your newly created partition
- create a test job and target it to the newly created Windows repository
- create a tape job as a secondary job

Any questions?
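For the "connect this LUN to proxy 2 and format it with NTFS" step, proxy 2 runs 2008 R2, where the newer storage cmdlets are not available, so a minimal sketch would use the built-in iscsicli initiator plus a diskpart script. Every IP, IQN, disk number, and drive letter below is a placeholder; double-check the "list disk" output before formatting anything:

    rem connect the Synology target (run from an elevated prompt); for a login that
    rem survives reboots, see iscsicli PersistentLoginTarget
    iscsicli QAddTargetPortal 10.0.0.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2000-01.com.synology:RackStation.Target-1

    rem contents of a diskpart script, run with: diskpart /s newlun.txt
    select disk 2
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs unit=64k quick label=VeeamRepo
    assign letter=R

After that, the drive letter can be added in Veeam as a direct attached (Windows) repository and used as the target of the test job.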
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

working on it...
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

All I can say is I'm very pleased with the outcome of this test.

VMware host > iSCSI LUN
Virtualized Veeam server > direct-attached iSCSI LUN (as backup repo) > formatted ReFS

We did a couple of tests.
1. Veeam backups compressed, writing to the backup repo: processing rates around 275MB/s. The Backup to Tape job ran at about the same throughput.
2. Veeam backups uncompressed, writing to the backup repo: processing rates around 250MB/s. The Backup to Tape job ran at about the same throughput.

Very much faster than what our current setup will do. I'm curious what it would be like if the unit hosting the iSCSI LUNs were SSD or NVMe instead of HDD.

I want to thank both of you, karsten123 and HannesK, for opening my eyes to a better way.

With this type of solution, however, we would have to totally redesign the way we do backups, but it would be so much more efficient.
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

I should probably add that the Veeam server's iSCSI initiator was aware of both LUNs.

The "VMware host > iSCSI LUN" datastore was formatted with VMFS5.
jaceg23
Influencer
Posts: 12
Liked: never
Joined: Mar 30, 2023 7:20 pm
Full Name: Jace G
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by jaceg23 »

Veeam "recommends" 64K on ReFS. But there is no space reclamation on ReFS? There is on NTFS, so what about using 2048K on NTFS? Any thoughts on this?
JannieH
Novice
Posts: 5
Liked: 2 times
Joined: Apr 25, 2018 11:10 am
Full Name: Jannie Hanekom
Contact:

Re: Veeam to Tape - Best Performance Ideas

Post by JannieH »

jaceg23 wrote on Apr 10, 2023 7:17 pm:
    "I cannot do anything about the RAID6; it was like that when I started. The network config is 3 of the 4 1Gbps connections in a bond at 3Gbps, plus two more (LAN 5 & 6) at 10Gbps each on what we call the 'SAN' network. One of the proxies is the mount server for the repository (is this the SMB gateway you refer to?). Again, 2008 R2 is what was on those machines when I got here, and Windows 10 is on the tape server because it was cheaper than Windows Server, plus the SAS card supports Windows 10 with drivers direct from the manufacturer. Firmware/drivers are up to date.

    For the diskspd, that was my next attempt."
I scanned this thread high-level several times but couldn't quite make out whether the tape gateway accesses the storage over the 1Gb network cards or the 10Gb network cards. So apologies if I incorrectly assume 1Gb.

Because it's something I've often seen missed in backup solutions of all kinds over the years, I'd like to add that a "bond" of network adapters doesn't truly aggregate the bandwidth. If you create:
  • just a regular bond in Windows (10 or Server, doesn't matter): transmit traffic for that server will be load-balanced (not aggregated) over the three adapters. Receive traffic (such as from the backup repo) will return over only one NIC, because a MAC address can only live on one switch port and ARP tables don't like the MAC address for a particular IP flapping around. Also, transmit traffic will be balanced over the adapters on a per-IP-session basis to prevent out-of-order packet delivery. So traffic for one IP session between two systems can only ever achieve a maximum of 1Gbps, the sum of all receive traffic can only ever be a maximum of 1Gbps, but the sum of all transmit traffic could be greater than 1Gbps.
  • a proper LACP aggregate (which requires some config on the switch side): it will behave similarly to the above, with the only difference being that return traffic will load-balance similarly over the 3x adapters - still with a maximum of 1Gbps throughput per IP session, though.
Thus: don't expect >1Gbps throughput just because you've got multiple network adapters in a bond. This is where iSCSI is usually better from a performance perspective, as iSCSI aggregates connections at a higher level (provided you've got one initiator per adapter), performing true load balancing. NB: do not run iSCSI over NIC teams as it may give unpredictable results.

(PS: SMB3 is generally just fine from a performance perspective, but Veeam cannot truly verify writes to the destination which increases the chances of data corruption/loss going undetected. Not saying that's the case here, but many people have a negative perception of SMB from past experience with entry-level embedded systems where the bottleneck is usually processing capacity, not SMB. Even for higher performing systems, I've often seen people attempt to write to NTFS-formatted portable drives, where the culprit is usually the slow, free version of the Tuxera NTFS-3G driver rather than SMB.)