-
- Veteran
- Posts: 283
- Liked: 11 times
- Joined: May 20, 2010 4:17 pm
- Full Name: Dave DeLollis
- Contact:
Automatic proxy selection
I am currently running Veeam BR 6.5.0.128 on a physical server with 8 cores and 8GB of RAM. The Veeam server is connected to a FC SAN via FC HBA. The physical server running Veeam is set to run 4 concurrent tasks. I have 14 backup jobs and 1 replication job. I have 5 other proxy servers that are VMs. All jobs are set to "Automatic selection" for their backup proxy choice. All proxies are set to "Automatic selection" for their Transport modes.
Last night 4 jobs ran within 1 hour of each other, starting at 6, 7, 8, and 9 pm respectively. Veeam chose to run all of the jobs on the "VMware Backup Proxy", which is the physical Veeam server. I guess that makes sense, since this server has the most resources and has a physical connection to the SAN. When the replication job started at 9:15 pm, the other 4 jobs were still running concurrently. The replication job sat there for over an hour without starting, and the Statistics page noted the following: "Waiting for backup infrastructure resources availability".
Is that telling me the replication job was trying to use the same proxy (the physical Veeam server) that the other 4 jobs were using? If so, why did it not use one of the 5 other proxies that were idle? Is this typical? What determines which proxy gets used when multiple concurrent jobs are running?
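The behavior described above is consistent with slot-based proxy scheduling: each proxy has a "Max concurrent tasks" limit, and a job waits when no eligible proxy has a free slot. The sketch below is a hypothetical illustration of that idea, not Veeam's actual selection algorithm; the class and function names are invented for this example.

```python
# Hypothetical sketch of "Automatic selection" proxy scheduling (NOT Veeam's
# actual algorithm): each proxy advertises a max number of concurrent tasks,
# and a new task goes to an eligible proxy with a free slot. If no eligible
# proxy has a free slot, the job waits, which would surface as
# "Waiting for backup infrastructure resources availability".

from dataclasses import dataclass

@dataclass
class Proxy:
    name: str
    max_tasks: int          # "Max concurrent tasks" proxy setting
    running: int = 0        # tasks currently assigned to this proxy

    def has_free_slot(self) -> bool:
        return self.running < self.max_tasks

def pick_proxy(proxies, eligible_names):
    """Return an eligible proxy with a free task slot, or None (job waits)."""
    candidates = [p for p in proxies
                  if p.name in eligible_names and p.has_free_slot()]
    if not candidates:
        return None
    # Illustrative tie-breaker only: prefer the proxy with the most free slots.
    return max(candidates, key=lambda p: p.max_tasks - p.running)
```

With the physical proxy's 4 slots saturated by 4 running jobs, a 5th job eligible only for that proxy (e.g. because the VM-based hotadd proxies cannot access the source datastore) gets `None` back and has to wait.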
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Automatic proxy selection
Dave, is there a chance that the replicated VM was already being processed by one of the backup jobs at the moment the replication started? And are the other proxies capable of hotadd, i.e., is there nothing preventing them from using it? There are some limitations, you know.
-
- Veteran
- Posts: 283
- Liked: 11 times
- Joined: May 20, 2010 4:17 pm
- Full Name: Dave DeLollis
- Contact:
Re: Automatic proxy selection
I'll double check to see if the running backup job and the replication job kicked off at the same time. The VM that is being replicated is the first one processed in the backup job, so there is a possibility. In the replication job, I am going to exclude the hour in time that the replicated VM is being backed up and see if that helps.
Other than that, I do have 4 clusters and I do not have a proxy VM in each cluster. Sounds like that is the recommended thing to do?
-
- Veteran
- Posts: 283
- Liked: 11 times
- Joined: May 20, 2010 4:17 pm
- Full Name: Dave DeLollis
- Contact:
Re: Automatic proxy selection
OK, I think what I did above fixed that particular issue.
Looking at my system right now, I have 4 concurrent jobs running, each started 1 hour apart. Looking at each job, it seems they have backed up a few VMs and are now stuck. Each job is trying to process a hard disk in a VM and is reporting the disk speed as (0.0KB) 0.0KB read at 0.0KB(CBT). They were in this state for 15 minutes before throughput actually started and the speeds increased. Is it normal for the backups to be "stuck" reading the disk with no throughput for 15 minutes?
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Automatic proxy selection
Daveyd wrote: In the replication job, I am going to exclude the hour in time that the replicated VM is being backed up and see if that helps.
Actually, there is no need to do that, as Veeam B&R allows parallel jobs against the same VM: the job that comes later will automatically skip the VM being backed up by another job and retry it once it is available for backup (this is exactly what you observed in your case).
Daveyd wrote: Other than that, I do have 4 clusters and I do not have a proxy VM in each cluster. Sounds like that is the recommended thing to do?
The main requirement for hotadd is that the host running the backup proxy VM must have all the datastores where the protected VMs' disks reside connected to it. If your backup proxy has access to the cluster datastore, then hotadd is most likely possible (with the limitations referred to above in mind).
-
- Veeam Software
- Posts: 21138
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Automatic proxy selection
Daveyd wrote: Each job is reporting the disk speed as (0.0KB) 0.0KB read at 0.0KB(CBT). They were in this state for 15 minutes before throughput actually started and the speeds increased. Is it normal for the backups to be "stuck" reading the disk with no throughput for 15 minutes?
This is probably due to the overhead each backup job incurs in connecting to vSphere, establishing the session, etc.; this can take a significant amount of time in some environments. The job log will tell you exactly what is being performed during these "stuck" moments.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Automatic proxy selection
I would also suggest checking RAM consumption on both the Veeam server and the vCenter server; I've seen many situations where those two servers ended up being too loaded and thus slowed down every activity.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1
-
- Veteran
- Posts: 283
- Liked: 11 times
- Joined: May 20, 2010 4:17 pm
- Full Name: Dave DeLollis
- Contact:
Re: Automatic proxy selection
I'll check out the job logs and see what I can find.
Luca, the server has 8 cores and 8GB of RAM. While running the 4 concurrent jobs on the server, I rarely see CPU or memory utilization go above 50%.
-
- VeeaMVP
- Posts: 6165
- Liked: 1971 times
- Joined: Jul 26, 2009 3:39 pm
- Full Name: Luca Dell'Oca
- Location: Varese, Italy
- Contact:
Re: Automatic proxy selection
OK, better for you. Mine was only another point in the checklist; in some environments the Veeam and vCenter VMs need far more than 8 GB of RAM to work properly.
Luca.
Luca Dell'Oca
Principal EMEA Cloud Architect @ Veeam Software
@dellock6
https://www.virtualtothecore.com/
vExpert 2011 -> 2022
Veeam VMCE #1