Can you please help me understand the logic you use when dealing with tape libraries and multiple tape drives? Consider the following scenario for a moment please. I will change the names of the jobs to protect the guilty. When I do full backups I have multiplexing turned on so that I can use both drives in the Dell TL2000 library that I have. During the week, for the incrementals, I have it turned off to ensure that tapes are filled rather than left half empty. I will illustrate the point below with a single job.
Day 1: Multiplexing ON: Multiple active full backup jobs for multiple VMs run, and their files are written to tape via dedicated tape jobs.
Day 2: Multiplexing OFF: The daily backup runs on Library 1 and picks CJH018L6 in Drive 1. After the backup, CJH018L6 has 350 GB of its 2.3 TB free.
I notice that CJH019L6 has 2.2 TB of its 2.3 TB free. Knowing that my backups take about 400 GB a day, if I left Veeam to its own devices it would fill the remaining 350 GB on CJH018L6 and then grab another spare tape, which would end up with only about 50 GB used. Trying to help Veeam out here, I eject CJH018L6 from Drive 1 and load CJH019L6, with its 2.2 TB free, into Drive 1.
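To make the spanning concern concrete, here is a minimal sketch of the arithmetic I am worried about (Python, with made-up helper names and the round capacities above): appending ~400 GB to a tape with only ~350 GB free forces a span onto a second tape that ends up barely used.
Code:
# Toy arithmetic only; this is not Veeam's actual tape selection algorithm.
def span_write(tapes, job_size_gb):
    """Fill tapes in the given order, spilling onto the next one when full."""
    remaining = job_size_gb
    for name, free_gb in tapes:
        written = min(free_gb, remaining)
        remaining -= written
        print(f"{name}: wrote {written} GB, {free_gb - written} GB still free")
        if remaining <= 0:
            break

# Left to its own devices: continue on the nearly full tape, then span ~50 GB onto a spare.
span_write([("CJH018L6", 350), ("spare LTO-6", 2300)], 400)

# What I was trying to steer it towards: use the nearly empty tape instead.
span_write([("CJH019L6", 2200)], 400)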
To make sure that it is going to use the tape that is now in there, I run the backup job to create another VIB and then run the subsequent tape job again. Instead of writing to CJH019L6, which is sitting in Drive 1, it loads CJH018L6 into Drive 2 and completes the backup, ignoring CJH019L6 completely.
Day 2 job log summary; you can see it used Drive 1 with tape CJH018L6:
Code:
23/12/2019 11:34:40 PM :: Building source backup files list started at 23/12/2019 11:34:40 PM
23/12/2019 11:34:41 PM :: New backup P2-ARBBackup-AccountsPSD2019-12-23T232939_B4B3.vib will be placed into the media set
23/12/2019 11:34:41 PM :: Source backup files detected. VIB: 1
23/12/2019 11:34:41 PM :: Queued for processing at 23/12/2019 11:34:41 PM
23/12/2019 11:34:41 PM :: Required backup infrastructure resources have been assigned
23/12/2019 11:34:42 PM :: Waiting for tape infrastructure resource availability
23/12/2019 11:50:32 PM :: Using tape library IBM 3573-TL C.30
23/12/2019 11:50:35 PM :: Drive 1 (Server: ARBBACKUP, Library: IBM 3573-TL C.30, Drive ID: Tape1) locked successfully
23/12/2019 11:50:38 PM :: Current tape is CJH018L6
23/12/2019 11:50:38 PM :: New tape backup session started, encryption: disabled
23/12/2019 11:50:42 PM :: Processing incremental backups started at 23/12/2019 11:34:38 PM
23/12/2019 11:50:43 PM :: Processing P2-ARBBackup-AccountsPSD2019-12-23T232939_B4B3.vib
23/12/2019 11:50:59 PM :: 0 folders and 1 files have been backed up
23/12/2019 11:50:59 PM :: Busy: Source 11% > Proxy 17% > Network 14% > Target 99%
23/12/2019 11:50:59 PM :: Primary bottleneck: Target
23/12/2019 11:50:59 PM :: Network traffic verification detected no corrupted blocks
23/12/2019 11:50:59 PM :: Processing finished at 23/12/2019 11:50:59 PM
The rerun after the tape swap; instead of using CJH019L6 already sitting in Drive 1, it loads CJH018L6 into Drive 2:
Code:
24/12/2019 10:07:06 AM :: Building source backup files list started at 24/12/2019 10:07:06 AM
24/12/2019 10:07:06 AM :: New backup P2-ARBBackup-AccountsPSD2019-12-24T100307_40D7.vib will be placed into the media set
24/12/2019 10:07:06 AM :: Source backup files detected. VIB: 1
24/12/2019 10:07:06 AM :: Queued for processing at 24/12/2019 10:07:06 AM
24/12/2019 10:07:06 AM :: Required backup infrastructure resources have been assigned
24/12/2019 10:07:07 AM :: Using tape library IBM 3573-TL C.30
24/12/2019 10:07:10 AM :: Drive 2 (Server: ARBBACKUP, Library: IBM 3573-TL C.30, Drive ID: Tape0) locked successfully
24/12/2019 10:07:11 AM :: Loading tape CJH018L6 from Slot 3 to Drive 2 (Server: ARBBACKUP, Library: IBM 3573-TL C.30, Drive ID: Tape0)
24/12/2019 10:08:03 AM :: Current tape is CJH018L6
24/12/2019 10:08:04 AM :: New tape backup session started, encryption: disabled
24/12/2019 10:09:07 AM :: Processing incremental backups started at 24/12/2019 10:07:03 AM
24/12/2019 10:09:09 AM :: Processing P2-ARBBackup-AccountsPSD2019-12-24T100307_40D7.vib
24/12/2019 10:09:17 AM :: 0 folders and 1 files have been backed up
24/12/2019 10:09:18 AM :: Busy: Source 11% > Proxy 7% > Network 11% > Target 99%
24/12/2019 10:09:18 AM :: Primary bottleneck: Target
24/12/2019 10:09:18 AM :: Network traffic verification detected no corrupted blocks
24/12/2019 10:09:18 AM :: Processing finished at 24/12/2019 10:09:18 AM
So, I would like to know: what causes this logic? Does Veeam try to keep writing to the same tape it used during its last run instead of realising that there is already a tape in a drive which belongs to the correct media set?
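To make the question concrete, here is a rough sketch of the selection order I expected versus what the logs above suggest. This is purely my guess expressed in Python; the function and field names are invented and it is not Veeam's actual implementation.
Code:
# Speculative sketch only; names and structure are invented, not Veeam's code.
def pick_tape_expected(drives, media_set_tapes):
    """What I expected: prefer a suitable tape that is already loaded in a drive."""
    for drive in drives:
        if drive["loaded_tape"] in media_set_tapes:
            return drive["loaded_tape"], drive["name"]
    # Otherwise load the next tape of the media set from a slot.
    return media_set_tapes[0], drives[0]["name"]

def pick_tape_observed(last_used_tape, drives):
    """What the logs suggest: keep writing to the tape used on the previous run,
    loading it into an empty drive, even though another tape from the same
    media set is already sitting in a drive."""
    empty_drive = next(d["name"] for d in drives if d["loaded_tape"] is None)
    return last_used_tape, empty_drive

drives = [
    {"name": "Drive 1", "loaded_tape": "CJH019L6"},  # the tape I loaded manually
    {"name": "Drive 2", "loaded_tape": None},
]
media_set = ["CJH019L6", "CJH018L6"]

print(pick_tape_expected(drives, media_set))   # ('CJH019L6', 'Drive 1')
print(pick_tape_observed("CJH018L6", drives))  # ('CJH018L6', 'Drive 2')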
I logged this with support a little while ago but got no joy; no-one could really explain what was going on with any level of confidence.
Cheers,
Aaron