-
- Veteran
- Posts: 357
- Liked: 17 times
- Joined: Feb 13, 2009 10:13 am
- Full Name: Trevor Bell
- Location: Worcester UK
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Updated PM sent. I think it will take around 30-40 minutes for the 8 GB file.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
This makes it 4-5 MB/s... about 10 times slower than expected on a 1Gb LAN.
I am starting to suspect a bad performance issue with ESX4 (unless this is a storage or network related issue).
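For reference, the arithmetic behind that estimate (assuming the 8 GB file and the 30-40 minute figure quoted above): 8192 MB / 1800 s ≈ 4.6 MB/s, and 8192 MB / 2400 s ≈ 3.4 MB/s - an order of magnitude below the ~110 MB/s a 1Gb link can carry at wire speed.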
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
8 GB VMDK
I am now running some tests:
C:\vcb\vcbMounter.exe -h vcenter.xxxx.xx -u administrator -p xxxxxxx -a ipaddr:192.168.123.12 -r "j:\test" -t fullvm -m san -M 1 -F 1 -L 3
0 start - transferring logs etc
15 sec main vmdk (no network traffic, vmdk on destination grows)
210 sec main file - network traffic starts
350 sec complete
C:\vcb\vcbMounter.exe -h vcenter.xxxx.xx -u administrator -p xxxxxxx -a ipaddr:192.168.123.12 -r "j:\test2" -t fullvm -m san -L 3
0 start - transferring logs etc
15 sec main vmdk
150 sec complete
network backup pass 1
45 sec start vmdk
19 min 30 sec complete 7 MB/s
network backup pass 2
50 sec start vmdk
11 min 03 sec completed 13 MB/s
san backup pass 1
18 min 30 sec completed 7 MB/s
san backup pass 2
3 min 15 sec completed 44 MB/s (it is faster, but it takes some time before the main file - the vmdk - starts transferring (logs etc). The speed for the main vmdk transfer was approximately 60 MB/s.)
It looks like ESX network backup performance is bad (an ESX service console issue?). There also seems to be an issue in the interaction between VCB and the Veeam Agent, because the vcbmounter job outside of Veeam is much faster, even with the -F 1 and -M 1 parameters.
I'm testing with a larger VM now (50GB+ VMDK, 50GB data)
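For context, a rough back-of-the-envelope on what those timings imply for the 8 GB VMDK: the plain SAN-mode run finished in 150 s, about 8192 MB / 150 s ≈ 55 MB/s, while the -M 1 -F 1 run took 350 s, about 8192 MB / 350 s ≈ 23 MB/s.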
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
This is a particularly interesting result. lohelle wrote: san backup pass 1
18 min 30 sec completed 7 MB/s
san backup pass 2
3 min 15 sec completed 44 MB/s (it is faster, but it takes some time before the main file - the vmdk - starts transferring (logs etc). The speed for the main vmdk transfer was approximately 60 MB/s.)
Thing is, with Veeam Backup in VCB mode, the whole VM is retrieved from storage to the backup server in both the initial and incremental passes - same vcbmounter.exe command line. So this specific operation (VM retrieval from storage) should happen at the same speed and take the same time between job runs.
What really differs between the initial (full) and incremental backup cycles is the amount of data that needs to be piped from the Veeam Backup server to the backup storage and saved there. In the initial (full) cycle, it is the whole VM; in the incremental cycle, it is changed blocks only. Thus, all signs point to issues with target storage speed (or the connection to that storage).
Based on this experiment, there are NO issues with VCB data retrieval speed (unless something happened with the SAN storage between runs). The VM data retrieval speed by vcbmounter.exe was more than 60MB/s for that run.
Lars, does it make sense? Let me know and I can elaborate.
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Just finished testing. Gostev wrote: Steve, Trevor - could you please perform the following quick test too, to check the SAN-to-local transfer channel:
1. Open VMware Infrastructure Client on your Veeam Backup server.
2. Right-click your datastore, open the datastore browser.
3. Download a large test file from your storage to the local drive.
4. Note the time the operation took to calculate average transfer speed.
Hint: don't copy the same file more than once, or Windows file system cache will affect the results.
1.8GB ISO file downloaded from the SAN to the Veeam guest local drive took 90 seconds, giving 20MB/sec by my math.
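A quick check of that math: 1.8 GB ≈ 1843 MB, and 1843 MB / 90 s ≈ 20.5 MB/s - so 20MB/sec is accurate.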
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
One thing that I'm wondering about after a long night of testing backups...
We're seeing the initial Veeam Backup job limited to 20-25MB/sec. The Service Console (at least in ESX 3.5) seems to have an acknowledged limit on network/disk speed. I'm guessing the same is true in ESX 4.0, thus the speed wall on the initial backup.
The part that's got me wondering is that we're seeing the SAME 20-25MB/sec on the incremental backups after the initial full.
During the testing last night, what I found was that during the initial backup, I'm seeing the vSphere charts for Disk and Service Console network interface flatten out around the 20-25MB/sec range. Doesn't really help in figuring out whether it's a disk or network throttle.
During the incremental backups, I still see 20-25MB/sec of disk activity, but I see nearly NO traffic on the Service Console interface.
I'm guessing that the dedupe stuff is being done on the Service Console itself and only necessary blocks are being transferred out to the Veeam guest. Is that correct?
If that's the case, it sounds like we're seeing a throttle being placed on SAN disk transfers to the Service Console that's holding everything up.
Solid logic? Missing something?
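One way to separate a disk throttle from a network throttle (a sketch, assuming shell access to the host; counter names from memory): run esxtop in the service console while the backup is active, press d for the disk view and n for the network view, and compare the read throughput (MBREAD/s) with the traffic on the service console NIC (vswif0). If disk reads flatten at 20-25MB/sec while the console NIC sits near idle, the cap is on the disk path rather than the network.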
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
So it sounds like we've managed to isolate this problem from Veeam Backup, and confirm that this is an issue with ESX4? Because 20 MB/s is the same speed that you see with the Veeam Backup engine on stuffed VMDK files. Yes, Veeam Backup will work faster transferring VMDKs with lots of white space (as you've seen in previous testing, where you obtained 34MB/s), but it cannot work faster than 20MB/s for VMDK files without white space (like ISO image files). sphilp wrote: 1.8GB ISO file downloaded from the SAN to the Veeam guest local drive took 90 seconds, giving 20MB/sec by my math.
[UPDATE] I've just seen your other post, give me a sec to read it through and answer it.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
This indicates that the bottleneck is data retrieval speed from storage through your ESX host. Remember that during the incremental backup cycle, the whole VM needs to be retrieved to be analyzed by the service console agent for changed blocks. sphilp wrote: One thing that I'm wondering about after a long night of testing backups...
We're seeing the initial Veeam Backup job limited to 20-25MB/sec. The Service Console (at least in ESX 3.5) seems to have an acknowledged limit on network/disk speed. I'm guessing the same is true in ESX 4.0, thus the speed wall on the initial backup.
The part that's got me wondering is that we're seeing the SAME 20-25MB/sec on the incremental backups after the initial full.
NB: we are actually removing this requirement to pull the whole VM for change analysis in Veeam Backup 4.0 - some new APIs allow for finding changed blocks without analyzing the whole image (but this will work only with ESX4 hosts).
Most likely it is your storage and/or its connectivity, because people have reported much faster speeds with ESXi4 RC and local storage. Veeam "Agentless mode" uses the same API as when you copy a file with the VIC Datastore Browser. sphilp wrote: During the testing last night, what I found was that during the initial backup, I'm seeing the vSphere charts for Disk and Service Console network interface flatten out around the 20-25MB/sec range. Doesn't really help in figuring out whether it's a disk or network throttle.
Correct! sphilp wrote: During the incremental backups, I still see 20-25MB/sec of disk activity, but I see nearly NO traffic on the Service Console interface. I'm guessing that the dedupe stuff is being done on the Service Console itself and only necessary blocks are being transferred out to the Veeam guest. Is that correct?
Based on your Datastore Browser copy speed test, I would say it is your VM storage speed and/or its connectivity. And if you are sure that you've seen higher speeds before, there could be some change in ESX4 that reduced the speed with which ESX4 is able to pull data from your storage. sphilp wrote: If that's the case, it sounds like we're seeing a throttle being placed on SAN disk transfers to the Service Console that's holding everything up.
Solid logic? Missing something?
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
OK. I found something VERY strange! Keep in mind that this backup store is fast for vcbmounter backups outside Veeam.
I tried a backup in Veeam (SAN mode) to a different RAID on the same server (based on an el cheapo Sil3114R card). Backups are now a lot faster. The first pass is now running, and I see speeds between 35-40 MB/s.
The strange thing is that the vcbmounter backup is fast on both the Sil3114R RAID and the LSI SAS RAID (the main backup store).
So it seems to be a local problem after all. But why is Veeam the only thing running slowly on this store?
Copies from and to the store show speeds of 100-150 MB/s between the RAID arrays.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Lars, the thing is - we are not just copying one large file. Since we use deduplication, our VBK is actually SIS (single instance storage) with a TOC, block hashes, an index, and references to data blocks. All of these are updated as data blocks are written to the storage. So in the case of a bad I/O controller board or firmware, or bad cache settings, you can get several times slower target storage performance, as in this example. lohelle wrote: OK. I found something VERY strange! Keep in mind that this backup store is fast for vcbmounter backups outside Veeam.
I tried a backup in Veeam (SAN mode) to a different RAID on the same server (based on an el cheapo Sil3114R card). Backups are now a lot faster. The first pass is now running, and I see speeds between 35-40 MB/s.
The strange thing is that the vcbmounter backup is fast on both the Sil3114R RAID and the LSI SAS RAID (the main backup store).
So it seems to be a local problem after all. But why is Veeam the only thing running slowly on this store?
Copies from and to the store show speeds of 100-150 MB/s between the RAID arrays.
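In other words, a plain file copy is almost purely sequential I/O, while a SIS-style backup file mixes large sequential writes with small random updates to the TOC, hash and index areas. A rough way to test for this (a sketch, assuming a benchmark tool such as Iometer on the backup server; the access specs are illustrative, not Veeam's exact pattern): run a 64 KB 100% sequential write test and then a 4 KB 100% random write test against the LSI array. If the random-write result collapses while the sequential result stays high, the controller or its cache settings are the likely bottleneck for this kind of workload.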
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
sphilp wrote: If that's the case, it sounds like we're seeing a throttle being placed on SAN disk transfers to the Service Console that's holding everything up.
Solid logic? Missing something?
Yeah, it's a convenient theory, but it doesn't seem to bear out in reality... As a test, I logged onto one of the guests hosted on that SAN. The guest has a 2GB file that I copied to the same backup storage target used by Veeam. The transfer occurred at 85MB/sec. That certainly doesn't sound like a SAN connectivity problem! Gostev wrote: Based on your Datastore Browser copy speed test, I would say it is your VM storage speed and/or its connectivity. And if you are sure that you've seen higher speeds before, there could be some change in ESX4 that reduced the speed with which ESX4 is able to pull data from your storage.
Next idea?
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
This speed difference means that the data path is different in this case - could it be some change or bug in ESX4? Both in the case of the Datastore Browser copy and when copying the file from within the guest, all I/O goes through the ESX I/O stack. However, in one case the data transfer speed is 20MB/s, and in the other it is 85MB/sec. I don't have an explanation for this at the moment... sphilp wrote: Yeah, it's a convenient theory, but it doesn't seem to bear out in reality... As a test, I logged onto one of the guests hosted on that SAN. The guest has a 2GB file that I copied to the same backup storage target used by Veeam. The transfer occurred at 85MB/sec. That certainly doesn't sound like a SAN connectivity problem! Next idea?
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Well, for one, the service console isn't in the way...
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
It must be something about the way Veeam reads and writes to the array that the RAID controller/driver/cache does not handle well. This is an LSI SAS RAID adapter with 8x 1.5TB SATA disks. The "new" test was against a single 1TB SATA drive connected to the Sil3114R controller.
I'm going to check the RAID controller cache settings etc.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Correct, but it has always been in the way for the other one (in ESX 3.5, where you've seen much better performance). BTW, just in case the upgrade path messed up some settings, could you check your service console and vSwitch configurations for the traffic shaping setting? sphilp wrote: Well, for one, the service console isn't in the way...
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Traffic shaping is grayed out. I did test with the checkbox turned on and "Disabled" selected; no change in speed. Gostev wrote: Correct, but it has always been in the way for the other one (in ESX 3.5, where you've seen much better performance). BTW, just in case the upgrade path messed up some settings, could you check your service console and vSwitch configurations for the traffic shaping setting?
My belief is that the ESX 3.5 Service Console only throttled the Network transfer speed. Is that correct?
Is it possible that ESX 4.0's Service Console is throttling the Disk transfer speed?
It sounds like a plausible theory given that we're seeing no change in backup speeds between the initial full and the follow-on incremental. It seems every time we try transferring through the Service Console, we're being throttled to 20-25MB/s.
Guest networks don't seem to be under the same speed limit. Copying files from or to a guest shows expected wire speeds. Copying between guests gets us expected speeds. It seems to be JUST when we're going through the Service Console that things are slowed down.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
I'll try to do some testing to verify this, and get back to you with the results.
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Just wanted to report that I have changed the RAID card, and the performance is OK with VCB now.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Lars, thanks for letting us know.
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Any updates? We'd really like to get this backup speed issue resolved...
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
I should have an update tomorrow. The FC4 lab I planned to use for performance testing had some issues with storage, so we had to spend a couple of days restoring storage on my good old performance testing host. By now, I've finished all testing on ESX 3.5 U3 and it looks good (I could not reach a cap for either upload or download to/from ESX 3.5). ESX 4.0 will be installed on the very same host tomorrow, and I will repeat the testing.
Some info on my lab:
- ESX 3.5 U3 with RAID0 local storage on a couple of modern hard drives (time cat to /dev/null shows 107MB/s download speed).
- Debian Linux with a modern hard drive formatted ext2 for best performance (time cat to /dev/null shows 54MB/s download speed).
Test files: 4GB in size with randomly generated content (to prevent the Veeam Backup engine from doing empty block removal and traffic compression, thus affecting the results). I am using 5 different files to ensure that the file system cache on the Linux server does not affect the results.
Some ESX 3.5 U3 metrics:
File download (equals to backup) from ESX to Linux server: 40-50MB/s (caps at target disk write speed)
File upload (equals to restore) from Linux server to ESX: 50-55MB/s (caps at source disk read speed)
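For anyone wanting to reproduce the raw read test mentioned above, a minimal sketch (run from the ESX service console; the datastore and file names are placeholders):
time cat /vmfs/volumes/datastore1/testvm/testvm-flat.vmdk > /dev/null
Dividing the file size by the elapsed (real) time reported by time gives the sequential read speed.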
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Sounds good. Thanks!
-
- Novice
- Posts: 3
- Liked: never
- Joined: May 04, 2009 10:55 pm
- Full Name: RL
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Thanks for all the good information, everyone. I am seeing this issue as well, immediately after upgrading to vSphere 4.0. My backups run to NFS exports using the service console agent. Upgrading to Veeam Backup 3.1 since then hasn't improved the performance.
Total size of VMs to backup: 653.03 GB
Processed size: 653.03 GB
Avg. performance rate: 48 MB/s
Start time: 5/29/2009 11:00:24 PM
End time: 5/30/2009 2:54:25 AM
Total size of VMs to backup: 653.03 GB
Processed size: 653.03 GB
Avg. performance rate: 22 MB/s
Start time: 6/1/2009 11:00:18 PM
End time: 6/2/2009 7:36:11 AM
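(Those rates are consistent with the run times: 653.03 GB ≈ 668,700 MB; 668,700 MB / 14,041 s ≈ 48 MB/s for the 5/29 run, and 668,700 MB / 30,953 s ≈ 22 MB/s for the 6/1 run - so the slowdown is real throughput, not a reporting artifact.)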
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
First tests showed that things are not looking good for ESX4... I did not get to the actual Veeam Backup testing yet, but time cat to /dev/null shows almost 5 times slower disk read speed than with ESX 3.5 on the same host. Disk performance graphs in VIC confirm these results. I will be doing more tests now, and then provide more information and actual numbers.
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Thank you for the feedback!
I would actually prefer to run service console agents, as I can then max out my SAN (300-400 MB/s total) and the jobs would complete within a small backup window at night. I have powerful 16-core servers, so the impact on VMs would not be a problem. Testing with vRanger Pro (on ESX 3.5) I was able to do that - but I hated the GUI and scheduling part.
I hope you/VMware figure this out.
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Lars, I agree. I know many of our customers used to do exactly this with Veeam Backup and ESX 3.5... so I hope this gets addressed.
-
- Enthusiast
- Posts: 36
- Liked: 9 times
- Joined: May 28, 2009 7:52 pm
- Full Name: Steve Philp
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Gostev - from your last message, it sounds like we're now waiting for VMware? Or do you think a fix/workaround is possible in Backup 3.1?
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Steve, no - at the moment I am performing some final tests and consolidating all the information; I will then distribute it, including to VMware. There is no way to fix this from the backup application side, because it is the service console disk read that is capped. I hope this is a bug and not intentional.
-
- Service Provider
- Posts: 77
- Liked: 15 times
- Joined: Jun 03, 2009 7:45 am
- Full Name: Lars O Helle
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
Gostev: I see that during a backup using the service console agent there are 2000-2500 read cmds/s on the sw iSCSI adapter, but only 15-17 MB/s of read throughput. CPU load in the service console also seems rather high (this is while running the second "differential" backup, not the first).
Can this high I/O have something to do with the problem? Is the packet size very low for some reason, or is this normal?
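For reference, those numbers imply an average request size of roughly 16 MB/s / 2,250 cmds/s ≈ 7 KB per read - far smaller than the large sequential requests you would expect when streaming an image, which is consistent with a small read block size capping throughput.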
-
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Stumped by backup speed change after upgrading to vSphere
I've posted all the information I've gathered so far on this issue in my blog at http://www.vnotion.com/?p=38