It might; what is the random R/W I/O performance of your target storage? Synthetic fulls will take longer than actual fulls in most environments. How long does a real full backup of that data take?
Is this a synthetic full with transform or just a normal synthetic full? Is this the first run after the V6 upgrade? How big were your incremental backups prior to the run starting?
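If you want a rough number for that random I/O question, below is a minimal Python sketch that times random 4 KB reads against a large existing file on the repository volume. The path and sample count are placeholders, and OS caching will flatter the result unless the test file is much bigger than the server's RAM, so treat it as a ballpark only.

import os, random, time

TEST_FILE = r"D:\Backups\iotest.bin"  # placeholder: any large file on the repository volume
BLOCK = 4096                          # 4 KB blocks, close to the worst case during a transform
SAMPLES = 2000                        # number of random reads to time

size = os.path.getsize(TEST_FILE)
fd = os.open(TEST_FILE, os.O_RDONLY | getattr(os, "O_BINARY", 0))
try:
    start = time.time()
    for _ in range(SAMPLES):
        # seek to a random block-aligned offset and read one block
        os.lseek(fd, random.randrange(0, size - BLOCK, BLOCK), os.SEEK_SET)
        os.read(fd, BLOCK)
    elapsed = time.time() - start
finally:
    os.close(fd)

print("random read IOPS: %.0f (%.2f MB/s)"
      % (SAMPLES / elapsed, SAMPLES * BLOCK / elapsed / 1024 / 1024))

Small arrays of 7.2k drives typically land in the hundreds of IOPS for this kind of workload, and synthetic fulls and transforms are very sensitive to that number.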
Hi,
It's a synthetic full with transform. Yes, it's the first run; so far there has only been the initial full and one incremental. I configured the job to run incrementals Sunday through Thursday and a synthetic full on Friday. V6 is a new installation, not an upgrade; I preferred to do a fresh setup rather than upgrading the existing V5.
That does sound like a pretty long time for the amount of data. Is there any other activity on the target storage? Are there multiple synthetic jobs running?
Hi,
When the job was running there was one other job running as well, an 800 GB synthetic on another VM. That job finished, but the job on this VM is still sitting at 99%, Transform 50%.
1. Exit out of Backup and Replication Console
2. Stop all Veeam Services
3. Open Task Manager and click on the Processes tab
4. Kill all Veeam Services and Veeam Agents (see the sketch after this list for a quick way to spot any that are left)
5. Open the vSphere Client
6. Check the Snapshot Manager and the datastore of the VM in question for any open snapshots
7. If there are any snapshots, manually create a snapshot and then hit the "Delete All" button inside the Snapshot Manager
8. Restart all Veeam Services
9. Open Backup and Replication Console.
10. If the job is still stuck, then reboot the Veeam box
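If you would rather script the check in steps 2-4 than hunt through Task Manager, here is a small sketch using the third-party psutil package; the assumption that every relevant process name starts with "Veeam" is mine, so double-check against Task Manager.

import psutil  # third-party: pip install psutil

# List any Veeam services/agents still alive after stopping the services (steps 2-4),
# so you know exactly what is left to end in Task Manager.
for proc in psutil.process_iter(["pid", "name"]):
    name = proc.info["name"] or ""
    if name.lower().startswith("veeam"):
        print(proc.info["pid"], name)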
It is still running; the file is still being written to. I am going to wait it out and see where it is in the morning. It did the same with several VMs and they eventually finished.
Well, it finally finished. It sat on "Transform 85%" for around 12 hours. Maybe it is the server, but I don't remember having these issues with version 5.
VMware Backup job: server
Created by at 3/1/2012 4:43:59 PM. Success
1 of 1 VMs processed
Saturday, March 03, 2012 7:15:02 PM

Success  1    Start time  7:15:02 PM          Total size   925.0 GB   Backup size  61.2 GB
Warning  0    End time    7:12:19 AM (+2 days) Data read    84.3 GB    Dedupe       1.0x
Error    0    Duration    35:57:16             Transferred  61.1 GB    Compression  1.3x

Details
Name     Status   Start time  End time     Size      Read     Transferred  Duration
storage  Success  7:16:31 PM  10:11:04 PM  925.0 GB  84.3 GB  61.1 GB      2:54:32
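Reading those numbers back: the VM itself was processed in under three hours, so nearly all of the 35:57:16 was spent elsewhere. A quick sanity check, assuming the gap between the job duration and the per-VM processing time is essentially all transform:

# Values copied from the report above
def hms(h, m, s):
    return h * 3600 + m * 60 + s

job_duration  = hms(35, 57, 16)  # total job duration
vm_processing = hms(2, 54, 32)   # per-VM line: 84.3 GB read, 61.1 GB transferred

print("hours outside VM processing: %.1f" % ((job_duration - vm_processing) / 3600))
# -> roughly 33 hours, i.e. the transform, not the data movement, dominated the run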
Okay, I defragged the VM and ran backups; the defrag caused the incremental to be rather large. Now, though, everything seems to be running great. The last incremental only took 10 minutes.
Any update on this? I'm down to 12 days of a 30-day trial and had my first Saturday transform this past weekend, where the backup job tried to "Transform previous full backup chains into rollbacks" and took 25 hours and 25 minutes to get that one step done! Normal backup times are 1.5-2 hours for this job (incrementals) and about 2.5-3 hours for a full (first time).
Hmm, I hope I can get my trial extended; 30 days isn't enough so far to test everything under Hyper-V (current environment) and VMware (what we are hoping to switch to in the next 3-6 months). Too many weird and wonky bugs to work through.
P.S. The backup is "Processing" about 800 GB of data (Hyper-V VMs). The VBK file is around 400 GB in size. VRBs are around 35-40 GB each.
Target storage is a RAID 6 of 12 x 3.5" 7,200 rpm 2 TB drives with 512 MB DDR3 cache (local storage on a Dell PE box). Veeam is on this same server.
I still have Backup Exec on there and it's constantly doing junk at about 2 MB/s against the dedupe folders, but I'm in the process (as I write this) of removing it from the equation...
It just really seems like a small file (400 GB isn't that big...) for it to be taking 25 hours, even on this storage.
Bottleneck is always around 1-3% target, 99% source for the backup of the actual VMs.
That would be the perfect setup to investigate the issue deeply. What sort of I/O are you seeing during the transform, is it very low? How much RAM does the system have, and is all of the available RAM occupied by the system cache? Are you using Windows 2008 or later for that server? I ask because I have just posted an update to an old topic about a performance degradation issue in the VMware subforum, which might be related.
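If it is easier to capture numbers than to sit and watch Resource Monitor, a rough sketch like the one below (third-party psutil; the disk key is a placeholder for the repository disk) will log read/write throughput every few seconds while the transform runs. Performance Monitor gives you the same counters if you prefer the built-in tools.

import time
import psutil  # third-party: pip install psutil

DISK = "PhysicalDrive1"  # placeholder: the repository disk as named by psutil on this server

prev = psutil.disk_io_counters(perdisk=True)[DISK]
while True:          # Ctrl+C to stop
    time.sleep(5)
    cur = psutil.disk_io_counters(perdisk=True)[DISK]
    print("read %.1f MB/s  write %.1f MB/s" % (
        (cur.read_bytes - prev.read_bytes) / 5 / 1024 / 1024,
        (cur.write_bytes - prev.write_bytes) / 5 / 1024 / 1024))
    prev = cur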
Unfortunately I was not monitoring it during the transform this Saturday (I assumed it would magically work nicely and faster without any issues; well, that and I was too tired/lazy to log in from home over the weekend to check things out, lol).
Server has 24 GB RAM, usually around 10% used. Windows 2008 R2, latest SP1/patches/fixes.
If that other degradation issue is related, let me know.
We have a Veeam job covering three VMs, totaling 3 TB in size; in VBK land, 1.8 TB. Each daily VIB is around 70 GB.
All runs fine, except the transforms take a loooong time: 72 hours or so to fold the dailies into a full.
The Veeam backup repo has 16 GB of RAM, and CPU usage barely gets above 3% during the transforms. Storage is a shared HP MSA: fast disk, really fast Veeam backups to this disk, slow transforms. I can't work out if it's the sheer size of the data, or if this is unusually slow.
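For a sense of scale, my rough mental model of a transform (my own back-of-envelope, not Veeam's documented internals) is that for each incremental being folded in, its blocks are read from the VIB, the blocks they replace are read from the VBK, the VBK is rewritten, and the displaced blocks go out to a VRB, so roughly 4x each incremental's size gets moved around, mostly as random I/O against the full:

# Very rough transform I/O model: assumes ~4x each incremental's size is moved around,
# and that the effective random throughput figures below are plausible for the array.
GB = 1024 ** 3

vib_sizes_gb = [70] * 5                  # assumption: five dailies being folded into the full
io_volume = sum(vib_sizes_gb) * 4 * GB   # ~1.4 TB of mostly random I/O

for mb_per_s in (10, 30, 100):           # effective random throughput guesses
    hours = io_volume / (mb_per_s * 1024 ** 2) / 3600
    print("at %3d MB/s effective: %.1f hours" % (mb_per_s, hours))

At the 10 MB/s end, which is not unusual for random I/O on a busy shared array, that alone gets you into tens of hours, so the 72-hour figure may just reflect the storage's random performance rather than anything broken in the job.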