Backup proxy as VM or Not
Hi,
Just a quick question: which is faster, an additional VM acting as proxy, or leaving the proxy work on the physical machine running the jobs?
With regards,
Lex
Chief Product Officer
Posts: 31806
Liked: 7300 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Re: Backup proxy as VM or Not
Hi, it depends on the processing mode the backup proxy running on that physical machine uses. See the sticky FAQ topic for more info. Thanks!
Re: Backup proxy as VM or Not
Hi,
That FAQ is nice, but it is mostly v6 and v7 based. As for the backup proxy mode, everything is set to automatic.
But the question remains:
One server with VMware and direct-attached disks, plus a physical server with a NAS as backup storage (1 Gb LAN). Would a VM acting as proxy make it any faster?
Re: Backup proxy as VM or Not
Actually, nothing has changed in v8 compared to previous versions with regard to VMware processing modes. So please read that topic carefully, as it specifically ranks processing modes from fastest to slowest depending on network speed and the type of primary storage.
Automatic mode selection does not provide any additional information. You need to determine what effective mode the proxy uses to retrieve data from the virtual disks (see the job log). Based on what you have said so far, I am guessing it is going to be NBD, in which case the second question of the Network Mode FAQ chapter covers your exact scenario and should fully answer your question. Thanks!
Re: Backup proxy as VM or Not
Done this test; it is even slower than the backup proxy on the physical machine: virtual proxy at 34 MB/s versus physical proxy at 90 MB/s.
Re: Backup proxy as VM or Not
This result would normally indicate a major problem with your vSphere setup, because it essentially means that your VMs cannot read data from local storage any faster than 34 MB/s.
Unless the bottleneck was elsewhere in the backup infrastructure for this job run - have you reviewed the bottleneck stats in the job log?
Or, perhaps the comparison was not "clean" (for example, hosts' storage was too busy serving other VMs during your test, while previously backups were performed during "quiet" hours).
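To help rule out a storage-side limit before blaming the proxy, a quick raw-read check from inside one of the guests can be useful. A minimal sketch, assuming a Linux guest with GNU `dd` available (the filename is hypothetical; note that without a file larger than the guest's RAM, the page cache can inflate the read figure):

```shell
# Write a scratch file to the datastore-backed disk, flushing to stable
# storage, then time reading it back. dd prints a throughput summary
# line on stderr; keep only that last line.
dd if=/dev/zero of=scratch.bin bs=1M count=64 conv=fsync 2>&1 | tail -n 1
dd if=scratch.bin of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f scratch.bin
```

If the read figure here is also in the 30-40 MB/s range while other VMs are busy, the bottleneck is the shared storage rather than the proxy placement.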
Re: Backup proxy as VM or Not
The bottleneck is different per VM: some report source, some report target, and some report network. All on one and the same server (6 VMs).
All the Windows NTFS volumes are backed up through CBT.
Re: Backup proxy as VM or Not
A fluctuating bottleneck means both your storage and network are running at full I/O capacity, so a reduction in load on either component immediately makes the other the bottleneck (with overall performance remaining very slow). This means I was most likely right, and your test result was impacted by other VMs generating heavy load on your storage during the test.
How many hard drives do you have in the host and what are they (RPM, interface, RAID level)?
Also, do you literally have a single 1Gb link to host? Or a few 1Gb links teamed?
Re: Backup proxy as VM or Not
Single NIC at the moment. As for the drives, all are 10,000 RPM SAS 6 Gb/s:
Code:
Smart Array P420 in Slot 9
Bus Interface: PCI
Slot: 9
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 5.22
Rebuild Priority: Low
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 50% Read / 50% Write
Drive Write Cache: Enabled
Total Cache Size: 1024 MB
Total Cache Memory Available: 816 MB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 88
Cache Module Temperature (C): 46
Capacitor Temperature (C): 20
Number of Ports: 2 Internal only
Driver Name: hpsa
Driver Version: 5.5.0.58-1OEM
Driver Supports HP SSD Smart Path: False
Logical Drive: 1
Size: 1.1 TB
Fault Tolerance: 1
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
Caching: Enabled
Unique Identifier: 600508B1001CC5CFA4C55FD3D9B61375
Disk Name: vmhba3:C0:T0:L0
Mount Points: None
Logical Drive Label: A4F08AC2PDSXK0BRH5T0TZ3FBF
Mirror Group 0:
physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 1200.2 GB, OK)
Mirror Group 1:
physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 1200.2 GB, OK)
Drive Type: Data
LD Acceleration Method: Controller Cache
Logical Drive: 2
Size: 1.4 TB
Fault Tolerance: 5
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 1280 KB
Status: OK
Caching: Enabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C60936E02EC9B5E3C6D79
Disk Name: vmhba3:C0:T0:L1
Mount Points: None
Logical Drive Label: 05E8EEB7PDSXK0BRH5T0TZ10B9
Drive Type: Data
LD Acceleration Method: Controller Cache
Smart Array P420i in Slot 0 (Embedded)
Bus Interface: PCI
Slot: 0
RAID 6 (ADG) Status: Enabled
Controller Status: OK
Hardware Revision: B
Firmware Version: 5.22
Rebuild Priority: Low
Expand Priority: Medium
Surface Scan Delay: 3 secs
Surface Scan Mode: Idle
Queue Depth: Automatic
Monitor and Performance Delay: 60 min
Elevator Sort: Enabled
Degraded Performance Optimization: Disabled
Inconsistency Repair Policy: Disabled
Wait for Cache Room: Disabled
Surface Analysis Inconsistency Notification: Disabled
Post Prompt Timeout: 15 secs
Cache Board Present: True
Cache Status: OK
Cache Ratio: 50% Read / 50% Write
Drive Write Cache: Enabled
Total Cache Size: 512 MB
Total Cache Memory Available: 304 MB
No-Battery Write Cache: Disabled
Cache Backup Power Source: Capacitors
Battery/Capacitor Count: 1
Battery/Capacitor Status: OK
SATA NCQ Supported: True
Spare Activation Mode: Activate on physical drive failure (default)
Controller Temperature (C): 78
Cache Module Temperature (C): 37
Capacitor Temperature (C): 27
Number of Ports: 2 Internal only
Driver Name: hpsa
Driver Version: 5.5.0.58-1OEM
Driver Supports HP SSD Smart Path: False
Logical Drive: 1
Size: 1.9 TB
Fault Tolerance: 5
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 1792 KB
Status: OK
Caching: Enabled
Parity Initialization Status: Initialization Completed
Unique Identifier: 600508B1001C83390BC7A9AE3672947C
Disk Name: vmhba0:C0:T0:L0
Mount Points: None
Logical Drive Label: A4F08BBA001438027FEC170CEE9
Drive Type: Data
LD Acceleration Method: Controller Cache
Re: Backup proxy as VM or Not
The IOPS capacity of six 10K spindles in RAID6 (with its huge I/O penalty) can indeed be easily saturated by a few busy VMs.
If you are interested in performing a "clean" test, do a full backup with all 6 VMs shut down (so that neither storage nor network is doing anything else); in this case you will see that a VM backup proxy using hot-add provides better results than a physical backup proxy using NBD.
But in your case, efficient data retrieval does not really matter with the entire environment already running at full I/O capacity (as the fluctuating bottleneck indicates). So I'd say just keep everything deployed the way it is now. I assume you have no issues meeting your backup window today with just 6 VMs to back up?
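The spindle math above can be sketched with the standard RAID write-penalty rule of thumb. A back-of-the-envelope estimate, assuming roughly 140 random IOPS per 10K SAS drive and the usual RAID6 write penalty of 6 (both figures are generic rules of thumb, not measurements from this setup):

```python
def usable_iops(drives, per_drive_iops, read_frac, write_penalty):
    """Effective random IOPS: reads pass through at raw speed,
    while each logical write costs write_penalty disk operations
    (for RAID6: three reads plus three writes per updated block)."""
    raw = drives * per_drive_iops
    return raw / (read_frac + (1 - read_frac) * write_penalty)

# Six 10K spindles in RAID6 with a 70/30 read/write mix:
print(round(usable_iops(6, 140, 0.7, 6)))  # → 336
```

A few hundred effective IOPS is easily consumed by a handful of busy VMs, which is consistent with the fluctuating bottleneck described above.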
Re: Backup proxy as VM or Not
No, indeed, the backup and replication window is sufficient. But for future reference it is always nice to know what to improve in the basic design. This site was handed over to me, not designed by me.
Thanks so far.