-
- Enthusiast
- Posts: 81
- Liked: never
- Joined: Nov 06, 2013 3:15 pm
- Full Name: J Cook
- Contact:
Large VM split into multiple jobs
Hi, we have one troublesome file server whose full backup VBK file is over 17TB in size. This backup takes days to run and often fails near the end; a retry does not pick up where it left off, resulting in many attempts over several days and preventing incremental backups from running during that period.
If I were to break this server down into two jobs, specifying half the drives per job, could both jobs run simultaneously, or would the first job to run lock the VM?
Thanks
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Large VM split into multiple jobs
Hello,
Yes, you can choose which of the VM's disks to process and back them up in different jobs. However, you cannot run the jobs simultaneously; one job will not start until the other has finished.
Thanks!
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Large VM split into multiple jobs
I'd warn you about possible restore issues though.
-
- Expert
- Posts: 189
- Liked: 27 times
- Joined: Apr 24, 2013 8:53 pm
- Full Name: Chuck Stevens
- Location: Seattle, WA
- Contact:
Re: Large VM split into multiple jobs
I was in a similar situation and had to bail on using Veeam to back it up. In this case I rely on array snapshots and replication instead. Not ideal, but I have no other practical way of backing up a file server with ~20TB of data using Veeam. The next time this application (OnBase) is refreshed, I'll lobby for breaking it up into several smaller file servers. Or maybe when I can refresh my backup storage array with something that performs better, it'll be different.
Veeaming since 2013
-
- Veteran
- Posts: 7328
- Liked: 781 times
- Joined: May 21, 2014 11:03 am
- Full Name: Nikita Shestakov
- Location: Prague
- Contact:
Re: Large VM split into multiple jobs
How much time did it take to back up 20TB on average?
Have you made a bottleneck analysis?
-
- Expert
- Posts: 189
- Liked: 27 times
- Joined: Apr 24, 2013 8:53 pm
- Full Name: Chuck Stevens
- Location: Seattle, WA
- Contact:
Re: Large VM split into multiple jobs
About a week (!). The job ran so long that the snapshot commit time became too long and caused service outages, and it often failed.
As I recall, the bottleneck was two things:
1. The VDDK bug avoidance code for kb2042, which forces network mode instead of hot-add when virtual disks with the same name and SCSI device number are detected. NBD is much slower than hot-add.
2. At the time our backup storage array wasn't performing well.
I am hoping that with the upcoming Nimble support with 9.5 we will be able to back this thing up without affecting service, by using array snapshots.
One strategy I tried was to start the chain with just a couple of the large data volumes, then add a couple more over the next few days, eventually including them all. It didn't work out, though.
Of course, since my deduplicating storage array cannot do forever-forward incrementals, I must do periodic Fulls. If I cannot get a Full backup of this file server in less than 12 hours, there's no point in trying.
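As a rough sanity check on that 12-hour window, here is a small throughput calculation. The ~20TB size and the 12-hour full-backup window come from the posts above; the decimal-unit conversion is my assumption, and real jobs would also need headroom for snapshot commit and retries:

```python
# Rough full-backup throughput estimate: what sustained rate does a
# 20 TB full backup need in order to finish inside a 12-hour window?

def required_throughput_mb_s(data_tb: float, window_hours: float) -> float:
    """Return the sustained rate in MB/s (decimal units) needed to
    move data_tb terabytes within window_hours hours."""
    data_bytes = data_tb * 10**12               # decimal TB -> bytes
    window_seconds = window_hours * 3600        # hours -> seconds
    return data_bytes / window_seconds / 10**6  # bytes/s -> MB/s

# ~20 TB file server, 12-hour backup window
rate = required_throughput_mb_s(20, 12)
print(f"Required sustained rate: {rate:.0f} MB/s")  # ~463 MB/s
```

In other words, the whole chain (source storage, transport mode, and target array) would have to sustain roughly 460 MB/s end to end, which helps explain why an NBD transport plus a slow dedup target could not meet the window.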
Veeaming since 2013