Host-based backup of VMware vSphere VMs.
jja
Enthusiast
Posts: 46
Liked: 8 times
Joined: Nov 13, 2013 6:40 am
Full Name: Jannis Jacobsen
Contact:

Issues with large vm backup #02053266

Post by jja »

Hey!

We have our production file server running in VMware, and until some time back all the data drives were direct iSCSI targets from the SAN, with TSM backing up the data.
Veeam backed up the file server VM itself.

Because of the SAN (Openfiler), we could not safely expand drives that already had data on them, so we finally moved to a Hitachi SAN that is a bit more enterprise-grade.
We started creating VMDKs to move the data from the iSCSI targets into the file server itself, so Veeam could manage the backup.

We managed to move about 1/3 of the data to VMDKs.

Then, two weeks ago, a colleague started moving more targets in, and Veeam was not too happy about this: after the backup job had been running for some days, it failed.
My colleague then removed the newly synced data, but the job still failed, now with a new error
(something like "block is empty but is supposed to be full").

Veeam support checked the logs I uploaded and concluded that we had to run a new active full backup.
We had about 50 TB of free space, so judging by how much real data the VM held, that should have been fine, especially after removing the newly synced data.

It seems that Veeam treats drives that Windows reports as empty as still containing data: it reads all the drives and copies data even from the empty ones...
After reading 71.6 TB and transferring 44.9 TB, the job failed again, due to lack of space.

It then seems to have restarted the job without cleaning up after the previous failed run...

We are now excluding all the data drives from the job, moving the data drives back to TSM, and letting Veeam back up only the C: drive.

The question now is: how can we free the 45 TB the failed backup has used?
Storage is a bit too expensive to waste almost 50 TB on nothing.

-J
Andreas Neufert
VP, Product Management
Posts: 6742
Liked: 1407 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Issues with large vm backup #02053266

Post by Andreas Neufert »

Hi Jannis,
thanks for the request. I hope I can help you here.

Based on your support case, you block-cloned the entire volume, which, depending on the tool used, also writes every block on the VMware side (the whole disk is marked as changed).

In any case, on the initial full backup Veeam reads all data that VMware reports as used, which in your case is 100% because your cloning method touched every block.

Veeam works at the block level and cannot "see" whether files are still in use, so we have to read 100% of the data. Even a disk that looks empty can still hold deleted data that was only removed from the file system metadata.
Veeam has an option (enabled by default) to skip such deleted-file blocks on NTFS, but not on other file systems.
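
To illustrate why this matters for backup size, here is a small Python sketch (nothing Veeam-specific, just an illustration) comparing how a zeroed free block and a free block still holding old file data behave under compression:

[code]
import os
import zlib

BLOCK = 4 * 1024 * 1024  # 4 MiB test block

zero_block = b"\x00" * BLOCK     # free space that has been zeroed
stale_block = os.urandom(BLOCK)  # stands in for free space still holding deleted data

# A block-level backup cannot tell these apart from the file system's
# point of view; it only benefits from how well each block compresses.
print("zeroed block compresses to:", len(zlib.compress(zero_block)), "bytes")
print("stale block compresses to:", len(zlib.compress(stale_block)), "bytes")
[/code]

The zeroed block shrinks to a few KB, while the stale block is stored at close to its full size, so a disk full of deleted data costs almost as much as a genuinely full disk.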

If you already had a backup and then extended the disk in VMware, the Changed Block Tracking (CBT) information from VMware becomes invalid and we have to read 100% of the data again. This is by design on VMware's side.
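
If you want to check or reset CBT on the VM yourself, this can be scripted. Here is a sketch using the pyVmomi library; the vCenter address, credentials, and VM name are placeholders, and note that disabling CBT only takes effect after a power cycle or a snapshot create/delete:

[code]
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",          # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "fileserver01")  # placeholder VM name

    print("CBT enabled:", vm.config.changeTrackingEnabled)

    # Disable CBT so VMware discards the stale tracking data; the next
    # backup job re-enables it and performs one full read to reseed it.
    spec = vim.vm.ConfigSpec(changeTrackingEnabled=False)
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
[/code]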

To free the space, check under Backup & Replication / Backups / Disk whether you can delete some of the older restore points.

For the "empty-looking" but not actually empty disks, you can try running a tool like zerofree (for ext file systems on Linux) or SDelete (for NTFS on Windows) to zero out the free blocks so they are treated as unused.
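
If no such tool is at hand, the same effect can be approximated by filling the free space with a zero-filled file and then deleting it. A minimal Python sketch of that idea (the path is a placeholder; on a production volume you would want to stop before the disk runs completely full):

[code]
import os

FILL_PATH = "D:/zerofill.tmp"        # placeholder: any path on the volume to clean
CHUNK = b"\x00" * (4 * 1024 * 1024)  # write zeros in 4 MiB chunks

try:
    with open(FILL_PATH, "wb") as f:
        while True:
            f.write(CHUNK)           # keep writing until the volume is full
except OSError:
    pass                             # "disk full" ends the loop
finally:
    if os.path.exists(FILL_PATH):
        os.remove(FILL_PATH)         # free the space; those blocks now contain zeros
[/code]

After that, the next full backup still reads those blocks, but they compress and deduplicate to almost nothing.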
