We have our production fileserver in VMware, and until a while back all the drives were direct iSCSI targets from the SAN, with TSM backing up the data.
Veeam backed up the fileserver VM itself.
The old SAN (Openfiler) did not let us safely expand drives with data on them, so we have finally moved to a Hitachi SAN that is a bit more enterprise-grade.
We started creating VMDKs and moving the data from the iSCSI targets into the fileserver itself, so Veeam could manage the backups.
We managed to move about a third of the data into VMDKs.
Then, two weeks ago, a colleague started moving more targets in, and Veeam was not too happy about it.
After running for a few days, the backup job failed.
My colleague then removed the newly synced data, but the job still failed, now with a different error (something like "block is empty but was supposed to be full").
Veeam support checked the logs I uploaded and concluded that we had to run a new active full backup.
We had about 50 TB of free space, which should have been enough for the amount of real data on the VM, especially after removing the newly synced data.
It seems that Veeam treats the drives Windows reports as empty as if they still contain data: it reads every drive and copies data even from the "empty" ones...
After reading 71.6 TB and transferring 44.9 TB, the job failed again due to lack of space.
It then seems to have restarted the job without cleaning up after the previous failed run...
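If I understand it correctly, Veeam works at the block level, so blocks that once held data are still read and copied even after Windows has deleted the files, unless they have been zeroed out.
Assuming that is right, zeroing the free space inside the guest before the next active full (for example with Sysinternals SDelete) should let Veeam's zero-block detection skip those blocks, something like:

    sdelete.exe -z E:

(The drive letter is just an example and we would have to run it per data drive; zeroing free space also temporarily inflates thin-provisioned disks, so it needs care on the SAN side.)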
We are now excluding all the data drives from the job, moving them back to TSM, and letting Veeam back up only the C: drive.
The question now is: how can we free the ~45 TB the failed backup has used?
Storage is a bit too expensive to waste almost 50 TB of it...
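Is it as simple as right-clicking the failed backup under Backups > Disk in the console and choosing "Delete from disk", or the PowerShell equivalent, something like:

    Get-VBRBackup -Name "Fileserver backup" | Remove-VBRBackup -FromDisk

(the backup name is just a placeholder)?
Or do we have to clean up the repository folders manually?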