Host-based backup of Microsoft Hyper-V VMs.
pterpumpkin
Enthusiast
Posts: 36
Liked: 4 times
Joined: Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin
Contact:

CSV IO high latency/queues when running backups

Post by pterpumpkin »

Hi all!

We have recently discovered an issue where guest VMs on our 2012 R2 cluster suffer from I/O latency/queuing when Veeam snapshots the CSV they reside on.

We have been using Windows Event ID 833 on a SQL Server VM impacted by this issue as a reference point.
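
For anyone wanting to check their own guests, this is roughly how we pull those 833 events (a sketch; the provider name is the SQL instance name, so adjust it for your environment):

Code:

# Sketch: pull SQL Server "long I/O" warnings (Event ID 833) from the
# Application log on an affected guest.
Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'MSSQLSERVER'   # assumption: default SQL instance name
    Id           = 833
} -MaxEvents 50 |
    Select-Object TimeCreated, Message |
    Format-List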

The below log excerpt is taken from the cluster logs after a Veeam backup job has been initiated:

Code:

00000a24.00000c8c::2016/12/08-08:06:19.526 INFO  [DCM] BeginPrepareSnapshot: set snapshot state for Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7
00000a24.00000c8c::2016/12/08-08:06:19.573 INFO  [DCM] DeleteProviderInterface()
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] CreateProviderInterface(): HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] SetContext(exit): context VSS_CTX_CLIENT_ACCESSIBLE, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] Provider::BeginPrepareSnapshot(enter): volume \\?\GLOBALROOT\Device\Harddisk15\ClusterPartition2\ context VSS_CTX_CLIENT_ACCESSIBLE
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] Provider::BeginPrepareSnapshot(exit): volume \\?\GLOBALROOT\Device\Harddisk15\ClusterPartition2\, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] InsertIntoSnapshotMap(): snapshotId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d<-->00000000-0000-0000-0000-000000000000, snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, context VSS_CTX_BACKUP, requester 12, pseudoVolume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, realVolume \\?\GLOBALROOT\Device\Harddisk15\ClusterPartition2\, length 0, snapshotVolume \\?\Volume{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}, resourceId 1d554cbe-1803-4de9-a26a-ff6185924810, collection size 1
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event EndPrepareSnapshots
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event EndPrepareSnapshots
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event EndPrepareSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:21.901 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000044c.00002344::2016/12/08-08:06:22.526 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:22.526 INFO  [CAM] Token Created, Client Handle: 80002b78
0000044c.000024d0::2016/12/08-08:06:22.526 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.000024d0::2016/12/08-08:06:22.526 INFO  [CAM] Token Created, Client Handle: 80003aa4
00000a24.000026c0::2016/12/08-08:06:23.026 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.000026c0::2016/12/08-08:06:23.026 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.000026c0::2016/12/08-08:06:23.510 INFO  [RCM [RES] {VM NAME} (1) embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000044c.000024d0::2016/12/08-08:06:23.760 INFO  [CAM] Token Created, Client Handle: 8001b598
00000a24.00000c8c::2016/12/08-08:06:23.791 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:23.791 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PreCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.791 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PreCommitSnapshots
00000a24.00000e00::2016/12/08-08:06:23.791 INFO  [DCM] SetVolumeSnapshotState: volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, state PreCommit, disableCOW true, instanceId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:23.791 INFO  [DCM] processing SetSnapshotState: resource 1d554cbe-1803-4de9-a26a-ff6185924810, volume 1d554cbe-1803-4de9-a26a-ff6185924810:135266304, uniqueId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:23.791 INFO  [DCM] Push.AsyncHandleSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:23.791 INFO  [DCM] SyncHandler for san10_vol16: action dcm/setSnapshotState, sequence 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12 ({HOSTNAME})
00000a24.000029c4::2016/12/08-08:06:23.791 INFO  [DCM] SetSnapshotStateHandler(enter): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ state PreCommit
00000a24.00000fec::2016/12/08-08:06:23.807 INFO  [DCM] CsvFs Listener: snapshot [volume 0467edbf-2759-41da-b9e1-5227480ff6f7, state CsvFsSnapshotStatePrepare, sequence 15308]
00000a24.000029c4::2016/12/08-08:06:23.807 INFO  [DCM] SetSnapshotStateHandler: volume Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7, redirect count 1, status 0
00000a24.00000c8c::2016/12/08-08:06:23.823 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PreCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:23.823 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:23.823 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] Filter message received: snapshotSet [8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, 1]
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] LockVolumesOnSrvPreFlushAndHold: snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, volume count 1, status 0, handle HDL( ffffe0019854f4f0 )
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] SnapshotAgent::GetSnapshotSetInfoHandler(exit): snapshotSet [8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, 1]
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] Filter reply sent: HrError(0x00000000) snapshotSet [8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, 1]
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event CommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event CommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event CommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PostCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PostCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] UnlockVolumesOnSrvPostRelease: snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, status 0, handle HDL( ffffe0019854f4f0 )
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] Filter.CsvFltMessageTypePostSnapshotCreate(LookupByName false, Device 15, Partition 2, Name )
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PostCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00001a94::2016/12/08-08:06:24.604 INFO  [GEM] Node 14: Deleting [6:57513 , 6:57513] (both included) as it has been ack'd by every node
00000a24.00000f4c::2016/12/08-08:06:24.619 INFO  [GEM] Node 14: Deleting [8:29749 , 8:29749] (both included) as it has been ack'd by every node
00000a24.000014ac::2016/12/08-08:06:24.635 INFO  [GEM] Node 14: Deleting [9:5160 , 9:5160] (both included) as it has been ack'd by every node
00000a24.00000f54::2016/12/08-08:06:24.635 INFO  [GEM] Node 14: Deleting [7:484 , 7:484] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:24.635 INFO  [GEM] Node 14: Deleting [10:2212 , 10:2213] (both included) as it has been ack'd by every node
00000a24.00000e70::2016/12/08-08:06:24.651 INFO  [GEM] Node 14: Deleting [4:4159 , 4:4160] (both included) as it has been ack'd by every node
00000a24.00000e88::2016/12/08-08:06:24.651 INFO  [GEM] Node 14: Deleting [3:27247 , 3:27247] (both included) as it has been ack'd by every node
00000a24.00000e6c::2016/12/08-08:06:24.666 INFO  [GEM] Node 14: Deleting [1:59139 , 1:59140] (both included) as it has been ack'd by every node
00000a24.00002a04::2016/12/08-08:06:24.666 INFO  [GEM] Node 14: Deleting [12:3048 , 12:3049] (both included) as it has been ack'd by every node
00000a24.00000e74::2016/12/08-08:06:24.666 INFO  [GEM] Node 14: Deleting [11:40966 , 11:40967] (both included) as it has been ack'd by every node
00000a24.00000f10::2016/12/08-08:06:24.682 INFO  [GEM] Node 14: Deleting [2:63726 , 2:63727] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:24.713 INFO  [GEM] Node 14: Deleting [5:4315 , 5:4316] (both included) as it has been ack'd by every node
00000a24.00000e8c::2016/12/08-08:06:24.744 INFO  [GEM] Node 14: Deleting [13:33614 , 13:33615] (both included) as it has been ack'd by every node
00000a24.00000e00::2016/12/08-08:06:25.182 INFO  [DCM] SetVolumeSnapshotState: volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, state PostCommit, disableCOW false, instanceId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:25.182 INFO  [DCM] processing SetSnapshotState: resource 1d554cbe-1803-4de9-a26a-ff6185924810, volume 1d554cbe-1803-4de9-a26a-ff6185924810:135266304, uniqueId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:25.182 INFO  [DCM] Push.AsyncHandleSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:25.182 INFO  [DCM] SyncHandler for san10_vol16: action dcm/setSnapshotState, sequence 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12 ({HOSTNAME})
00000a24.000029c4::2016/12/08-08:06:25.182 INFO  [DCM] SetSnapshotStateHandler(enter): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ state PostCommit
00000a24.00000fec::2016/12/08-08:06:25.213 INFO  [DCM] CsvFs Listener: snapshot [volume 0467edbf-2759-41da-b9e1-5227480ff6f7, state CsvFsSnapshotStateCommit, sequence 15309]
00000a24.000029c4::2016/12/08-08:06:25.213 INFO  [DCM] SetSnapshotStateHandler: volume Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7, redirect count 0, status 0
00000a24.000026c0::2016/12/08-08:06:26.651 INFO  [RCM [RES] {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00001a90::2016/12/08-08:06:26.651 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00001ab0::2016/12/08-08:06:26.651 INFO  [RCM [RES] Virtual Machine {VM NAME} Resources embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.000026f4::2016/12/08-08:06:29.260 INFO  [RCM [RES] SCVMM {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] UpdateDatabaseForSnapshotSet: resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [GUM] Node 14: Processing RequestLock 14:772
00000a24.00000ea4::2016/12/08-08:06:31.776 INFO  [GUM] Node 14: Processing GrantLock to 14 (sent by 5 gumid: 972499)
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [GUM] Node 14: executing request locally, gumId:972500, my action: snap/addSnapshotInfo, # of updates: 1
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] GumAddSnapshotInfo: payload size 96
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] db.CreateDcmSnapshotVolume snapshot <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d on volume Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] RefreshSnapshotMap(DcmDb insert): volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:31.791 INFO  [DCM] PostCommitSnapshots(exit): 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.791 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PreFinalCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PreFinalCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PreFinalCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] SetSnapshotAutoRecoverAttribute: device \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, readOnly SetSnapshotReadOnly::No
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] SetSnapshotSetAutoRecovery: snapshot 2e6f6710-bcaf-11e6-80d5-98be943702d0, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] Sending revision list. Message id 14:10
00000a24.00000c8c::2016/12/08-08:06:31.854 INFO  [DCM] Sending revision list completed on all nodes. Message id 14:10
00000a24.00000c8c::2016/12/08-08:06:31.854 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] FilterAttach hr HrError(0x00000000) device \Device\HarddiskVolumeShadowCopy4
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] Filter.CsvFltMessageTypeSetVolumeRevisionNumberByDeviceName(Sequence 62, Device \Device\HarddiskVolumeShadowCopy4, Revision=1)
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] ClRtlCreateNetShare status 0 shareName b3a2e432-40e2-4d2e-bb17-1b0179a7d83d$, sharePath \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}\
00000a24.00000f58::2016/12/08-08:06:32.104 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] SetDownLevelNFilter: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \Device\HarddiskVolumeShadowCopy4
00001134.000016c0::2016/12/08-08:06:32.104 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER - Resource is online
00001134.000016c0::2016/12/08-08:06:32.104 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER: Disk number is 15
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] GetDiskGeometry: [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512]
00000a24.00000c8c::2016/12/08-08:06:32.119 WARN  [DCM] IOCTL_STORAGE_QUERY_PROPERTY(StorageDeviceSeekPenaltyProperty): status 1
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] STORAGE_PROPERTY_ID StorageDeviceResiliencyProperty not supported: status 87
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] new VolumeStateChangeRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:31
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] processing VolumeStateChangeExRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:31
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] Push.AsyncActivateSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] SyncHandler for san10_vol16: action dcm/activateSnapshot, sequence 14:31 ({HOSTNAME})
00000a24.00000f58::2016/12/08-08:06:32.119 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] CreatePseudoVolume: remote vbus to \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] CreatePseudoVolume: CREATE_PDO(1): UniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, DeviceNumber 0, DiskNumber 2147483647, PartitionNumber 0, Offset 0, Length 6596932403200. Flags 0x2
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] FilterAgent: CREATE_PDO(2): Geometry [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512], SerialNumber 0, TargetVolume \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, FSBlob \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] MapVolume: waiting for mount(0): <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume arrived
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PnpAddVolume: Adding volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PnpAddVolume: Ignoring cluster pseudo volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence , state CsvFsVolumeStateChangeFromPNP->CsvFsVolumeStatePaused, status 0x0]
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: mount reported for volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateInit --> CsvFsVolumeStatePaused
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: Map <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] Filter.Attach CsvFlt (\??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}\) => : status HrError(0x801f0012)
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] Filter.CsvFltMessageTypeSetVolumeRevisionNumberByDeviceName(Sequence 63, Device \Device\HarddiskVolumeShadowCopy4, Revision=1)
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: SetDownlevel: payload SetDownlevel(1): uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, diskNumber 0, offset 0, isLocal true, FS target \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, directIO true, targetNode 14
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: SetDownlevel(2): partition size 6596932403200, volume size 6596932337664, sector size 512, EnableCOW false, ForceWriteThrough 0, EnableBlockCache false, DiskConnectivityMdsNodeOnly, perfCnt 'snapshot(Volume16:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d)', sequence <14:31><15310>
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence <14:31><15310>, state CsvFsVolumeStateChangeFromSetDownlevel->CsvFsVolumeStateSetDownlevel, status 0x0]
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: SetDownlevel completed: volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] SetDownLevelNFilter: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \Device\HarddiskVolumeShadowCopy4
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] SetStorageProperties: <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] MapVolume: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d mapped in mode MappedDirectIO, desired [DirectIO: true, Maint: false, EnableCOW: false], actual [DirectIO: true, Maint: false, EnableCOW: false], disk 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] ActivateNetworkPath b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: CsvFsNotificationBlockPathConnected state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83dstate CsvFsVolumeStateChangeFromSetDownlevel->CsvFsVolumeStateSetDownlevel, status 0x0]
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: ActivateNetworkPath completed: volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] ActivateVolume: <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: ChangeCsvFsState: uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateActive, sequence <14:31><15311>
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence <14:31><15311>, state CsvFsVolumeStateChangeFromActivate->CsvFsVolumeStateActive, status 0x0]
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] UnmapPolicy::enter_CountingToGood(00000000-0000-0000-0000-000000000000): goodTimer R1...75, badTimer P0...150, state CountingToGood
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: Activate completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] ActivateVolume: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d activated in mode ActiveDirectIO
0000044c.00002198::2016/12/08-08:06:32.260 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002198::2016/12/08-08:06:32.260 INFO  [CAM] Token Created, Client Handle: 8001a35c
0000044c.00002198::2016/12/08-08:06:32.276 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002198::2016/12/08-08:06:32.276 INFO  [CAM] Token Created, Client Handle: 800247d0
0000044c.00002198::2016/12/08-08:06:32.308 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002198::2016/12/08-08:06:32.308 INFO  [CAM] Token Created, Client Handle: 80016d8c
00001134.00001ae8::2016/12/08-08:06:32.338 INFO  [RES] Physical Disk: PNP: PnpTriggerCSVMount opened \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event GetSnapshotProperties
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: snapshotId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] SnapshotApplicationInfo(Context): VSS_CTX_CLIENT_ACCESSIBLE
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotInfoFromDevice: snapshot \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: 0
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event GetSnapshotProperties
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: snapshotId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] SnapshotApplicationInfo(Context): VSS_CTX_CLIENT_ACCESSIBLE
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotInfoFromDevice: snapshot \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: 0
00000a24.00001028::2016/12/08-08:06:32.604 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00001028::2016/12/08-08:06:32.604 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PostFinalCommitSnapshots
00000a24.00001028::2016/12/08-08:06:32.604 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PostFinalCommitSnapshots
00000a24.00000e00::2016/12/08-08:06:32.604 INFO  [DCM] SetVolumeSnapshotState: volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, state DeleteCsvSnapshot, disableCOW false, instanceId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000e00::2016/12/08-08:06:32.604 INFO  [DCM] processing SetSnapshotState: resource 1d554cbe-1803-4de9-a26a-ff6185924810, volume 1d554cbe-1803-4de9-a26a-ff6185924810:135266304, uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000e00::2016/12/08-08:06:32.604 INFO  [DCM] Push.AsyncHandleSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] SyncHandler for san10_vol16: action dcm/setSnapshotState, sequence b3a2e432-40e2-4d2e-bb17-1b0179a7d83d ({HOSTNAME})
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] SetSnapshotStateHandler(enter): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ state DeleteCsvSnapshot
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] Deleting snapshot pseudo volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000f58::2016/12/08-08:06:32.604 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] CleanupNFilter <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] FilterAgent: ChangeCsvFsState: uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateInit, sequence <14:31><15312>
00000a24.00000fec::2016/12/08-08:06:32.604 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence <14:31><15312>, state CsvFsVolumeStateChangeFromInit->CsvFsVolumeStateInit, status 0x0]
00000a24.00000fec::2016/12/08-08:06:32.604 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence , state CsvFsVolumeStateChangeFromSurpriseRemoval->CsvFsVolumeStateSurpriseRemoved, status 0x0]
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] DeleteSnapshotPseudoVolume(exit): snapshot b3a2e432-40e2-4d2e-bb17-1b0179a7d83d status 0
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] SetSnapshotStateHandler(no-op): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ status 0
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume disappeared
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpRemoveVolume: Removing volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpRemoveVolume: starting CSV removal notify thread for \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpRemoveVolume: volume removal: ignoring cluster pseudo volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.0000134c::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpNotifyCSVRemoval: device \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00000a24.00000c8c::2016/12/08-08:06:32.604 INFO  [RCM] PreProcessControl CLUSCTL_RESOURCE_STORAGE_CSV_UPDATE_INFO for 'san10_vol16'
00000a24.00000c8c::2016/12/08-08:06:32.604 INFO  [DCM] UpdateVolumeInfo: resource 'san10_vol16'
00000a24.00000c8c::2016/12/08-08:06:32.604 INFO  [DCM] UpdateVolumeInfo: volume removal for b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000ea4::2016/12/08-08:06:32.776 INFO  [GEM] Node 14: Deleting [5:4317 , 5:4317] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:33.510 INFO  [CAM] Token Created, Client Handle: 800021b8
0000044c.00001840::2016/12/08-08:06:33.541 INFO  [CAM] Token Created, Client Handle: 8001ca6c
00000a24.00001a94::2016/12/08-08:06:34.135 INFO  [GEM] Node 14: Deleting [6:57514 , 6:57514] (both included) as it has been ack'd by every node
00000a24.00000f4c::2016/12/08-08:06:34.151 INFO  [GEM] Node 14: Deleting [8:29750 , 8:29750] (both included) as it has been ack'd by every node
00000a24.000014ac::2016/12/08-08:06:34.166 INFO  [GEM] Node 14: Deleting [9:5161 , 9:5161] (both included) as it has been ack'd by every node
00000a24.00000f54::2016/12/08-08:06:34.166 INFO  [GEM] Node 14: Deleting [7:485 , 7:485] (both included) as it has been ack'd by every node
00000a24.00000e70::2016/12/08-08:06:34.182 INFO  [GEM] Node 14: Deleting [4:4161 , 4:4162] (both included) as it has been ack'd by every node
00000a24.00000e6c::2016/12/08-08:06:34.182 INFO  [GEM] Node 14: Deleting [1:59141 , 1:59148] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:34.182 INFO  [GEM] Node 14: Deleting [10:2214 , 10:2224] (both included) as it has been ack'd by every node
00000a24.00000e74::2016/12/08-08:06:34.198 INFO  [GEM] Node 14: Deleting [11:40968 , 11:40969] (both included) as it has been ack'd by every node
00000a24.00000e88::2016/12/08-08:06:34.198 INFO  [GEM] Node 14: Deleting [3:27248 , 3:27248] (both included) as it has been ack'd by every node
00000a24.00002a04::2016/12/08-08:06:34.213 INFO  [GEM] Node 14: Deleting [12:3050 , 12:3051] (both included) as it has been ack'd by every node
00000a24.00000f10::2016/12/08-08:06:34.229 INFO  [GEM] Node 14: Deleting [2:63728 , 2:63729] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:34.260 INFO  [GEM] Node 14: Deleting [5:4318 , 5:4319] (both included) as it has been ack'd by every node
00000a24.00000e8c::2016/12/08-08:06:34.276 INFO  [GEM] Node 14: Deleting [13:33616 , 13:33617] (both included) as it has been ack'd by every node
00000a24.000026c0::2016/12/08-08:06:34.354 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00002b68::2016/12/08-08:06:34.354 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.0000065c::2016/12/08-08:06:34.354 INFO  [RCM [RES] SCVMM {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.000026c0::2016/12/08-08:06:34.354 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.0000240c::2016/12/08-08:06:34.354 INFO  [RCM [RES] {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000044c.00001840::2016/12/08-08:06:34.885 INFO  [CAM] Token Created, Client Handle: 80017f28
0000044c.00001840::2016/12/08-08:06:35.010 INFO  [CAM] Token Created, Client Handle: 8001c96c
0000044c.00001840::2016/12/08-08:06:35.119 INFO  [CAM] Token Created, Client Handle: 80013c48
00000a24.00000e70::2016/12/08-08:06:35.182 INFO  [GEM] Node 14: Deleting [4:4163 , 4:4165] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:35.213 INFO  [CAM] Token Created, Client Handle: 800246a4
00000a24.00000f04::2016/12/08-08:06:35.213 INFO  [GEM] Node 14: Deleting [10:2225 , 10:2234] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:35.338 INFO  [CAM] Token Created, Client Handle: 800199b4
00000a24.00000f4c::2016/12/08-08:06:35.416 INFO  [GEM] Node 14: Deleting [8:29751 , 8:29753] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:35.448 INFO  [CAM] Token Created, Client Handle: 8000c18c
00000a24.000014ac::2016/12/08-08:06:35.666 INFO  [GEM] Node 14: Deleting [9:5162 , 9:5164] (both included) as it has been ack'd by every node
00000a24.00000f54::2016/12/08-08:06:35.854 INFO  [GEM] Node 14: Deleting [7:486 , 7:488] (both included) as it has been ack'd by every node
00000a24.00001a94::2016/12/08-08:06:36.166 INFO  [GEM] Node 14: Deleting [6:57515 , 6:57517] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:36.182 INFO  [CAM] Token Created, Client Handle: 800240b4
0000044c.00001840::2016/12/08-08:06:36.291 INFO  [CAM] Token Created, Client Handle: 800175a0
00000a24.00002a04::2016/12/08-08:06:36.385 INFO  [GEM] Node 14: Deleting [12:3052 , 12:3057] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:36.448 INFO  [GEM] Node 14: Deleting [10:2235 , 10:2242] (both included) as it has been ack'd by every node
00000a24.00000e88::2016/12/08-08:06:36.619 INFO  [GEM] Node 14: Deleting [3:27249 , 3:27251] (both included) as it has been ack'd by every node
00000a24.00000e74::2016/12/08-08:06:36.963 INFO  [GEM] Node 14: Deleting [11:40970 , 11:40972] (both included) as it has been ack'd by every node
00000a24.00000f10::2016/12/08-08:06:37.401 INFO  [GEM] Node 14: Deleting [2:63730 , 2:63732] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:37.448 INFO  [GEM] Node 14: Deleting [10:2243 , 10:2255] (both included) as it has been ack'd by every node
00000a24.00000e8c::2016/12/08-08:06:37.932 INFO  [GEM] Node 14: Deleting [13:33618 , 13:33623] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:38.088 INFO  [GEM] Node 14: Deleting [5:4320 , 5:4322] (both included) as it has been ack'd by every node
00001134.000016c0::2016/12/08-08:06:38.573 INFO  [RES] Physical Disk: HarddiskpIsPartitionHidden: device \Device\Harddisk15\ClusterPartition2 8000000000000000
00000a24.00000f10::2016/12/08-08:06:38.588 INFO  [GEM] Node 14: Deleting [2:63733 , 2:63735] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:38.635 INFO  [GEM] Node 14: Deleting [10:2256 , 10:2265] (both included) as it has been ack'd by every node
00000a24.00000c8c::2016/12/08-08:06:38.729 INFO  [NM] Received request from client address {HOSTNAME}.
00001134.00001944::2016/12/08-08:06:38.760 INFO  [RES] Physical Disk: HarddiskpIsPartitionHidden: device \Device\Harddisk15\ClusterPartition2 8000000000000000
00000a24.000026f4::2016/12/08-08:06:38.776 INFO  [DCM] GetCsvVolumeMountpoints: resource 'san10_vol16' outBufSize 250
00000a24.000026f4::2016/12/08-08:06:38.776 INFO  [DCM] GetCsvVolumeMountpoints: resource 'san10_vol16' not in MM \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\
00000a24.000026f4::2016/12/08-08:06:38.776 INFO  [DCM] GetCsvVolumeMountpoints: resource 'san10_vol16' partition '2' returns 58 bytes and status 0
00000a24.00000e74::2016/12/08-08:06:38.869 INFO  [GEM] Node 14: Deleting [11:40973 , 11:40975] (both included) as it has been ack'd by every node
00000a24.00000e70::2016/12/08-08:06:39.182 INFO  [GEM] Node 14: Deleting [4:4166 , 4:4168] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:39.369 INFO  [GEM] Node 14: Deleting [5:4323 , 5:4325] (both included) as it has been ack'd by every node
0000044c.00002198::2016/12/08-08:06:40.963 INFO  [CAM] Token Created, Client Handle: 80001754
0000044c.00002344::2016/12/08-08:06:43.213 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:43.213 INFO  [CAM] Token Created, Client Handle: 80023468
0000044c.00002344::2016/12/08-08:06:43.276 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:43.276 INFO  [CAM] Token Created, Client Handle: 80020cf8
0000044c.00002344::2016/12/08-08:06:44.494 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:44.494 INFO  [CAM] Token Created, Client Handle: 8001b760
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PostFinalCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] DeleteProviderInterface()
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] SetSnapshotAutoRecoverAttribute: device \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, readOnly SetSnapshotReadOnly::Yes
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] SetSnapshotSetAutoRecovery: snapshot 2e6f6710-bcaf-11e6-80d5-98be943702d0, HrError(0x00000000)
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] Sending revision list. Message id 14:11
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] Sending revision list completed on all nodes. Message id 14:11
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00001028::2016/12/08-08:06:44.666 INFO  [DCM] FilterAttach hr HrError(0x00000000) device \Device\HarddiskVolumeShadowCopy4
00000a24.00001028::2016/12/08-08:06:44.666 INFO  [DCM] Filter.CsvFltMessageTypeSetVolumeRevisionNumberByDeviceName(Sequence 64, Device \Device\HarddiskVolumeShadowCopy4, Revision=2)
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] ClRtlCreateNetShare status 0 shareName b3a2e432-40e2-4d2e-bb17-1b0179a7d83d$, sharePath \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}\
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] SetCsvAppInfoGuid(): snapshot \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000f58::2016/12/08-08:06:44.682 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] SetDownLevelNFilter: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \Device\HarddiskVolumeShadowCopy4
00001134.00001944::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER - Resource is online
00001134.00001944::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER: Disk number is 15
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] GetDiskGeometry: [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512]
00000a24.00001028::2016/12/08-08:06:44.682 WARN  [DCM] IOCTL_STORAGE_QUERY_PROPERTY(StorageDeviceSeekPenaltyProperty): status 1
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] STORAGE_PROPERTY_ID StorageDeviceResiliencyProperty not supported: status 87
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] new VolumeStateChangeRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:32
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] processing VolumeStateChangeExRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:32
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] Push.AsyncActivateSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] SyncHandler for san10_vol16: action dcm/activateSnapshot, sequence 14:32 ({HOSTNAME})
00000a24.00000f58::2016/12/08-08:06:44.682 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] CreatePseudoVolume: remote vbus to \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] CreatePseudoVolume: CREATE_PDO(1): UniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, DeviceNumber 0, DiskNumber 2147483647, PartitionNumber 0, Offset 0, Length 6596932403200. Flags 0x3
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] FilterAgent: CREATE_PDO(2): Geometry [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512], SerialNumber 0, TargetVolume \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, FSBlob \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] MapVolume: waiting for mount(0): <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume arrived
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Adding volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00000a24.00000fec::2016/12/08-08:06:44.682 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence , state CsvFsVolumeStateChangeFromPNP->CsvFsVolumeStatePaused, status 0x0]
00000a24.00000fec::2016/12/08-08:06:44.682 INFO  [DCM] CsvFs Listener: mount reported for volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateInit --> CsvFsVolumeStatePaused
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] FilterAgent: Map <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Ignoring cluster pseudo volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0
This correlates with the Event ID 833 entries that we're seeing on a SQL server that sits on the above "san10_vol16".

Code:

Log Name:      Application
Source:        {DATABASE NAME}
Date:          8/12/2016 8:06:41 a.m.
Event ID:      833
Task Category: Server
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      {VM NAME}
Description:
SQL Server has encountered 5 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\path\to\database\name.ldf] in database id 7.  The OS file handle is 0x00000000000009D8.  The offset of the latest long I/O is: 0x00000029cf1000
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="{DATABASE NAME}" />
    <EventID Qualifiers="16384">833</EventID>
    <Level>4</Level>
    <Task>2</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2016-12-07T19:06:41.000000000Z" />
    <EventRecordID>150029</EventRecordID>
    <Channel>Application</Channel>
    <Computer>{COMPUTER NAME}</Computer>
    <Security />
  </System>
  <EventData>
    <Data>5</Data>
    <Data>15</Data>
    <Data>C:\path\to\database\name.ldf</Data>
    <Data>7</Data>
    <Data>00000000000009D8</Data>
    <Data>0x00000029cf1000</Data>
    <Binary>410300000A000000150000004800570052004C0059004E00430046004500300031005C005200540043004C004F00430041004C00000000000000</Binary>
  </EventData>
</Event>
We have also collected performance counters during backups, and we can clearly see the write latency and disk queue length on the CSV spike for 30 seconds or less when the Veeam CSV snapshot is initiated.
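
For reference, something along these lines is what we sampled (a sketch; the PhysicalDisk instance names will differ per host):

Code:

# Sketch: sample write latency and queue depth every second while a backup runs.
Get-Counter -Counter @(
    '\PhysicalDisk(*)\Avg. Disk sec/Write',
    '\PhysicalDisk(*)\Current Disk Queue Length'
) -SampleInterval 1 -MaxSamples 60 |
    ForEach-Object { $_.CounterSamples } |
    Where-Object CookedValue -gt 0 |
    Select-Object Path, CookedValue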

The question we have is: is this expected behavior? Should guest VMs suffer from I/O issues while the CSV is being snapshotted?


Thanks!
Pter.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: CSV IO high latency/queues when running backups

Post by Vitaliy S. »

Hi Pter,

The load on the backed-up VM and the datastore that hosts it is expected; that's why it is recommended to run backups outside business hours. As for your particular situation, I would first check what else is consuming datastore resources and would probably try to optimize (re-group) the VMs on the datastore.

Thanks!
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: CSV IO high latency/queues when running backups

Post by nmdange »

This is one of the reasons why Microsoft introduced the new resilient change tracking (RCT) feature in Hyper-V 2016, which no longer requires VSS snapshots on the CSVs.
pterpumpkin
Enthusiast
Posts: 36
Liked: 4 times
Joined: Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin
Contact:

Re: CSV IO high latency/queues when running backups

Post by pterpumpkin »

Thanks guys!

How do other people deal with this issue without causing interruptions to servers? We need to run backup/replication jobs during business hours.
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: CSV IO high latency/queues when running backups

Post by Vitaliy S. »

One thing that people do is change the default timeouts (in the case of clusters) or run fewer concurrent tasks against the same datastore. I would also consider looking at Hyper-V 2016, as nmdange advises.
pterpumpkin
Enthusiast
Posts: 36
Liked: 4 times
Joined: Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin
Contact:

Re: CSV IO high latency/queues when running backups

Post by pterpumpkin »

Thanks Vitaliy!

Are you able to provide more specific details about the default timeouts you're referring to? Are these Veeam timeouts, or cluster/CSV timeouts?
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: CSV IO high latency/queues when running backups

Post by Vitaliy S. »

Pter, the specifics depend on the exact issue you're observing. If you search these forums, you will find timeout settings for SQL Server clusters and AlwaysOn deployments. I would recommend searching for the recommendation that will help in your case, based on the Windows Event Log entries (the exact IDs of the events you see).
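
For illustration only (the values are placeholders, and the right knobs depend on what your event logs point to), these are the cluster-side settings usually discussed in those threads:

Code:

Import-Module FailoverClusters

# Heartbeat tolerance between nodes on the same subnet.
(Get-Cluster).SameSubnetDelay     = 2000   # ms between heartbeats
(Get-Cluster).SameSubnetThreshold = 20     # missed heartbeats tolerated

# AlwaysOn AG resource: health-check and lease timeouts, in milliseconds.
# 'MyAvailabilityGroup' is a hypothetical resource name.
Get-ClusterResource 'MyAvailabilityGroup' |
    Set-ClusterParameter -Multiple @{
        HealthCheckTimeout = 60000
        LeaseTimeout       = 40000
    }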
pterpumpkin
Enthusiast
Posts: 36
Liked: 4 times
Joined: Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin
Contact:

Re: CSV IO high latency/queues when running backups

Post by pterpumpkin »

Thanks Vitaliy.

My question was more geared towards where you were suggesting to change the timeout setting(s), e.g. CSV timeouts, SQL timeouts, cluster timeouts, iSCSI timeouts, etc. But I see now you were referring to SQL timeouts :)
Vitaliy S.
VP, Product Management
Posts: 27377
Liked: 2800 times
Joined: Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov
Contact:

Re: CSV IO high latency/queues when running backups

Post by Vitaliy S. »

Yes, you cannot change storage or hardware behavior during I/O-intensive operations, but you can adjust timeouts for the applications that are suffering from low I/O performance.
jdixon
Novice
Posts: 8
Liked: 1 time
Joined: Nov 23, 2016 4:30 pm
Full Name: Jacob Dixon
Contact:

Re: CSV IO high latency/queues when running backups

Post by jdixon »

I have the exact same problem. I verified it isn't my SAN that is the issue, and it appears it isn't my fiber links either, so I think it's the HBA. We have a 3PAR 12Gbps SAS all-SSD SAN with four 8Gbps fiber paths from switch to SAN and two fiber paths from switch to servers.

HP verified it isn't the SAN, so I'm really at a loss as to what it is.
pterpumpkin
Enthusiast
Posts: 36
Liked: 4 times
Joined: Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin
Contact:

Re: CSV IO high latency/queues when running backups

Post by pterpumpkin »

Hi jdixon,

From what I now understand, this is expected behaviour of a CSV.

I have read that this will be resolved in Server 2016, but we've yet to confirm this.

I believe that smaller CSVs will be less affected as the CSV snapshot will be quicker.

Do off-host proxy (hardware VSS) backups take a snapshot of the CSV? If not, that could be another fix.



Pter.
foggy
Veeam Software
Posts: 21139
Liked: 2141 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: CSV IO high latency/queues when running backups

Post by foggy »

pterpumpkin wrote: Do off-host proxy (hardware VSS) backups take a snapshot of the CSV? If not, that could be another fix.
Yes, in this case the snapshot is mounted onto the off-host proxy.
RWeiss
Enthusiast
Posts: 36
Liked: 2 times
Joined: Nov 13, 2014 10:29 am
Contact:

Re: CSV IO high latency/queues when running backups

Post by RWeiss »

We are having the same issue, and from what I've read, the problem is that when using the software VSS provider, the CSV basically switches to redirected I/O when VMs that are not on the owner node are backed up. Without a hardware VSS provider, I do not believe there is a way to reduce the impact.
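
For what it's worth, you can watch the redirection state per node while a backup runs (a sketch, using the OP's CSV name as a placeholder):

Code:

Import-Module FailoverClusters
# StateInfo shows Direct, FileSystemRedirected or BlockRedirected per node.
Get-ClusterSharedVolumeState -Name 'san10_vol16' |
    Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason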
nmdange
Veteran
Posts: 528
Liked: 144 times
Joined: Aug 20, 2015 9:30 pm
Contact:

Re: CSV IO high latency/queues when running backups

Post by nmdange »

RWeiss wrote: We are having the same issue, and from what I've read, the problem is that when using the software VSS provider, the CSV basically switches to redirected I/O when VMs that are not on the owner node are backed up. Without a hardware VSS provider, I do not believe there is a way to reduce the impact.
If you use a scale-out file server cluster to present the storage to Hyper-V over SMB, then that would eliminate redirected I/O.
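
Very roughly, the file-server side would look something like this (all names here are placeholders):

Code:

Import-Module FailoverClusters
Add-ClusterScaleOutFileServerRole -Name 'SOFS01'
New-Item -ItemType Directory -Path 'C:\ClusterStorage\Volume1\Shares\VMs' -Force
New-SmbShare -Name 'VMs' -Path 'C:\ClusterStorage\Volume1\Shares\VMs' `
    -FullAccess 'DOMAIN\HyperVHost1$', 'DOMAIN\Hyper-V Admins'
# Hyper-V hosts then place VMs on \\SOFS01\VMs rather than a local CSV path.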
jaredplaisted
Service Provider
Posts: 3
Liked: never
Joined: Dec 31, 2012 1:16 am
Full Name: jaz
Contact:

Re: CSV IO high latency/queues when running backups

Post by jaredplaisted »

Just a quick question: how many VMs per CSV do you have?

Sharing something we encountered about two years ago when we first started using Hyper-V 2012 R2, which may be relevant.

We used to have similar issues to what you describe. Back then we had large CSVs (3PAR) with approximately 20-30 VMs per CSV, and we experienced latency and temporary VM freezes whenever backups ran; even when backing up a single VM, every other VM on the same CSV would get hit by the issue, regardless of throttling limits or whether we used the software or hardware VSS provider. The freezing/latency only occurred during the first part of the backup job while the snapshot was being created; once the snapshot was created and the backup began reading/transferring data, performance returned to normal.

Since we re-architected our CSVs and reduced the number of VMs per CSV to 5-10, we no longer encounter the issue. We can run multiple concurrent backups across multiple CSVs and don't experience any latency or issues now.
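
If you want to check your own density quickly, something like this gives a rough count of VMs per CSV (a sketch; run it on each Hyper-V node or wrap it in Invoke-Command, and it assumes the VHDs live under C:\ClusterStorage\<Volume>):

Code:

Get-VM | Get-VMHardDiskDrive |
    Select-Object VMName, @{ n = 'Csv'; e = { ($_.Path -split '\\')[2] } } |
    Sort-Object VMName, Csv -Unique |   # count each VM once per volume
    Group-Object Csv |                  # one group per ClusterStorage volume
    Select-Object Name, Count |
    Sort-Object Count -Descending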
pterpumpkin
Enthusiast
Posts: 36
Liked: 4 times
Joined: Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin
Contact:

Re: CSV IO high latency/queues when running backups

Post by pterpumpkin »

jaredplaisted wrote: Just a quick question: how many VMs per CSV do you have?

Sharing something we encountered about two years ago when we first started using Hyper-V 2012 R2, which may be relevant.

We used to have similar issues to what you describe. Back then we had large CSVs (3PAR) with approximately 20-30 VMs per CSV, and we experienced latency and temporary VM freezes whenever backups ran; even when backing up a single VM, every other VM on the same CSV would get hit by the issue, regardless of throttling limits or whether we used the software or hardware VSS provider. The freezing/latency only occurred during the first part of the backup job while the snapshot was being created; once the snapshot was created and the backup began reading/transferring data, performance returned to normal.

Since we re-architected our CSVs and reduced the number of VMs per CSV to 5-10, we no longer encounter the issue. We can run multiple concurrent backups across multiple CSVs and don't experience any latency or issues now.
We do indeed have a large number of VMs per CSV (20-30). Reducing the number of VMs per CSV and increasing the number of CSVs could be a good idea.
Andreas Neufert
VP, Product Management
Posts: 7080
Liked: 1511 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: CSV IO high latency/queues when running backups

Post by Andreas Neufert »

Another customer had an issue with ODX and the storage.

Disabling SSD TRIM helped in that case. Please check with Microsoft Support whether this applies to you.

The solution was the following reg key:
Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "DisableDeleteNotification" -Value 1
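
If it helps, you can verify the current state before and after the change (a sketch; 0 means delete/TRIM notifications are enabled, 1 means disabled):

Code:

fsutil behavior query DisableDeleteNotification
# Or via the registry (the property may be absent if it was never set):
Get-ItemProperty hklm:\system\currentcontrolset\control\filesystem |
    Select-Object DisableDeleteNotification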
RWeiss
Enthusiast
Posts: 36
Liked: 2 times
Joined: Nov 13, 2014 10:29 am
Contact:

Re: CSV IO high latency/queues when running backups

Post by RWeiss »

We are experiencing the same issue. Ticket is open #04589107.
Glad I found this thread, as it describes exactly what we're experiencing. We didn't have any issues for years. In our case, it even looks like the volume locks up when a snapshot is performed on another CSV (on the same SAN). We don't have any known issues with the SAN, and response times are otherwise absolutely great, but as soon as a single snapshot is created anywhere, we get huge latency issues - we see response times in the region of 30+ SECONDS, which is unacceptable in our case (and I guess most cases...).

In the event log of the VMs I see entries like this:
svchost (1772) A request to write to the file "C:\Windows\system32\LogFiles\Sum\Svc.log" at offset 3710976 (0x000000000038a000) for 4096 (0x00001000) bytes succeeded, but took an abnormally long time (23 seconds) to be serviced by the OS. In addition, 0 other I/O requests to this file have also taken an abnormally long time to be serviced since the last message regarding this problem was posted 64 seconds ago. This problem is likely due to faulty hardware. Please contact your hardware vendor for further assistance diagnosing the problem.

We are planning to move to Server 2019 soon (TM), so hopefully that fixes the issue, because at this point I really don't know how I can solve this with the setup we have. Reducing the number of VMs per CSV could help, but I need to do additional testing; as it currently stands, the effects seem to be visible across CSVs, which is of course very bad.

We actually already disabled ODX because we were very unhappy with its performance (on a different node and storage pool that is not part of this cluster, so I don't think this is related - in the end we had transfers between volumes of 50MB/s, and without ODX we reached 1GB/s+).
Andreas Neufert
VP, Product Management
Posts: 7080
Liked: 1511 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: CSV IO high latency/queues when running backups

Post by Andreas Neufert »

Thanks for sharing. I think the ODX one and TRIM are separate things. Please allow me to share my list of things to avoid and how to address them below. Maybe it is helpful in this case.

Maybe coordinate a test with the Snapshot Helper with our support (see below). This will give you a baseline check of whether the issue is related to Veeam or not.
Then disable TRIM and ODX and see if that fixes it.

- I/O freezes? Disable TRIM and ODX:
TRIM: Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "DisableDeleteNotification" -Value 1
ODX: Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 1

Note: Veeam Support can give you a "Snapshot Helper" tool that allows you to create VM backup snapshots without our software involved. It uses native Windows commands, so you can analyse whether the issue is related to our software or is a general one.

- Check the antivirus exclusion lists for Hyper-V and Veeam:
http://social.technet.microsoft.com/wik ... hosts.aspx
https://www.veeam.com/kb1999
- Check for snapshot redirections => very bad idea => setting under VM - Management - Checkpoint File Location
- Don't touch the pagefile settings. The Microsoft defaults work fine.
- Hardware VSS provider: check firmware, driver and provider compatibility
- Don't use checkpoints (backup may fail) => use Veeam Quick Backups => remove checkpoints
- Check for firmware updates that support Hyper-V and use the correct drivers.
- Windows Update is NOT enough. Check the Microsoft hotfix lists for Hyper-V and clustering.
- Don't mix AMD/Intel in the same cluster. Keep the "instruction set" the same.
- Keep OS and VM data on separate disks
- Keep the hosts equal (NICs & storage controllers!)
- No other roles on the Hyper-V host
- No in-guest iSCSI/pass-through disks
- Only ReFS and NTFS are supported.
- If you run Win2003 or SQL2000, disable backup guest interaction + KB940349
- Disk Manager - "System Reserved" needs a minimum of 42MB free
- VM - Integration Services - Backup enabled?
- Hyper-V Volume Shadow Copy Requestor service started?
- Use jumbo frames and use logo-tested NICs
- 64KB disk allocation unit size for VHD(X) volumes (see the sketch below)
- Consider running Hyper-V with the "High performance" power plan rather than the default "Balanced"
- A minimum of 300MB free disk space per disk (local/CSV/any)
- Latest integration components
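
On the 64KB allocation unit point, a minimal sketch (the drive letter and label are placeholders):

Code:

# Format a new volume intended for VHDX files with a 64 KB allocation unit.
Format-Volume -DriveLetter X -FileSystem NTFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel 'CSV01' -Confirm:$false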