CSV IO high latency/queues when running backups

by pterpumpkin » Fri Dec 09, 2016 12:20 am

Hi all!

We have recently discovered an issue where guest VMs on our 2012 R2 cluster suffer from IO latency and queuing whenever Veeam snapshots the CSV they reside on.

We have been using Windows Event ID 833 ("SQL Server has encountered occurrence(s) of I/O requests taking longer than 15 seconds to complete"), logged on a SQL Server impacted by this issue, as a reference.

The log excerpt below is taken from the cluster log after a Veeam backup job has been initiated:

Code:
00000a24.00000c8c::2016/12/08-08:06:19.526 INFO  [DCM] BeginPrepareSnapshot: set snapshot state for Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7
00000a24.00000c8c::2016/12/08-08:06:19.573 INFO  [DCM] DeleteProviderInterface()
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] CreateProviderInterface(): HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] SetContext(exit): context VSS_CTX_CLIENT_ACCESSIBLE, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] Provider::BeginPrepareSnapshot(enter): volume \\?\GLOBALROOT\Device\Harddisk15\ClusterPartition2\ context VSS_CTX_CLIENT_ACCESSIBLE
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] Provider::BeginPrepareSnapshot(exit): volume \\?\GLOBALROOT\Device\Harddisk15\ClusterPartition2\, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:19.604 INFO  [DCM] InsertIntoSnapshotMap(): snapshotId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d<-->00000000-0000-0000-0000-000000000000, snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, context VSS_CTX_BACKUP, requester 12, pseudoVolume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, realVolume \\?\GLOBALROOT\Device\Harddisk15\ClusterPartition2\, length 0, snapshotVolume \\?\Volume{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}, resourceId 1d554cbe-1803-4de9-a26a-ff6185924810, collection size 1
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event EndPrepareSnapshots
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event EndPrepareSnapshots
00000a24.000026c0::2016/12/08-08:06:20.135 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event EndPrepareSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:21.901 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000044c.00002344::2016/12/08-08:06:22.526 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:22.526 INFO  [CAM] Token Created, Client Handle: 80002b78
0000044c.000024d0::2016/12/08-08:06:22.526 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.000024d0::2016/12/08-08:06:22.526 INFO  [CAM] Token Created, Client Handle: 80003aa4
00000a24.000026c0::2016/12/08-08:06:23.026 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.000026c0::2016/12/08-08:06:23.026 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.000026c0::2016/12/08-08:06:23.510 INFO  [RCM [RES] {VM NAME} (1) embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000044c.000024d0::2016/12/08-08:06:23.760 INFO  [CAM] Token Created, Client Handle: 8001b598
00000a24.00000c8c::2016/12/08-08:06:23.791 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:23.791 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PreCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.791 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PreCommitSnapshots
00000a24.00000e00::2016/12/08-08:06:23.791 INFO  [DCM] SetVolumeSnapshotState: volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, state PreCommit, disableCOW true, instanceId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:23.791 INFO  [DCM] processing SetSnapshotState: resource 1d554cbe-1803-4de9-a26a-ff6185924810, volume 1d554cbe-1803-4de9-a26a-ff6185924810:135266304, uniqueId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:23.791 INFO  [DCM] Push.AsyncHandleSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:23.791 INFO  [DCM] SyncHandler for san10_vol16: action dcm/setSnapshotState, sequence 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12 ({HOSTNAME})
00000a24.000029c4::2016/12/08-08:06:23.791 INFO  [DCM] SetSnapshotStateHandler(enter): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ state PreCommit
00000a24.00000fec::2016/12/08-08:06:23.807 INFO  [DCM] CsvFs Listener: snapshot [volume 0467edbf-2759-41da-b9e1-5227480ff6f7, state CsvFsSnapshotStatePrepare, sequence 15308]
00000a24.000029c4::2016/12/08-08:06:23.807 INFO  [DCM] SetSnapshotStateHandler: volume Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7, redirect count 1, status 0
00000a24.00000c8c::2016/12/08-08:06:23.823 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PreCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:23.823 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:23.823 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] Filter message received: snapshotSet [8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, 1]
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] LockVolumesOnSrvPreFlushAndHold: snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, volume count 1, status 0, handle HDL( ffffe0019854f4f0 )
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] SnapshotAgent::GetSnapshotSetInfoHandler(exit): snapshotSet [8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, 1]
00000a24.00000ff8::2016/12/08-08:06:23.838 INFO  [DCM] Filter reply sent: HrError(0x00000000) snapshotSet [8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, 1]
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event CommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event CommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event CommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:23.838 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PostCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PostCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] UnlockVolumesOnSrvPostRelease: snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, status 0, handle HDL( ffffe0019854f4f0 )
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] Filter.CsvFltMessageTypePostSnapshotCreate(LookupByName false, Device 15, Partition 2, Name )
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PostCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:23.854 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00001a94::2016/12/08-08:06:24.604 INFO  [GEM] Node 14: Deleting [6:57513 , 6:57513] (both included) as it has been ack'd by every node
00000a24.00000f4c::2016/12/08-08:06:24.619 INFO  [GEM] Node 14: Deleting [8:29749 , 8:29749] (both included) as it has been ack'd by every node
00000a24.000014ac::2016/12/08-08:06:24.635 INFO  [GEM] Node 14: Deleting [9:5160 , 9:5160] (both included) as it has been ack'd by every node
00000a24.00000f54::2016/12/08-08:06:24.635 INFO  [GEM] Node 14: Deleting [7:484 , 7:484] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:24.635 INFO  [GEM] Node 14: Deleting [10:2212 , 10:2213] (both included) as it has been ack'd by every node
00000a24.00000e70::2016/12/08-08:06:24.651 INFO  [GEM] Node 14: Deleting [4:4159 , 4:4160] (both included) as it has been ack'd by every node
00000a24.00000e88::2016/12/08-08:06:24.651 INFO  [GEM] Node 14: Deleting [3:27247 , 3:27247] (both included) as it has been ack'd by every node
00000a24.00000e6c::2016/12/08-08:06:24.666 INFO  [GEM] Node 14: Deleting [1:59139 , 1:59140] (both included) as it has been ack'd by every node
00000a24.00002a04::2016/12/08-08:06:24.666 INFO  [GEM] Node 14: Deleting [12:3048 , 12:3049] (both included) as it has been ack'd by every node
00000a24.00000e74::2016/12/08-08:06:24.666 INFO  [GEM] Node 14: Deleting [11:40966 , 11:40967] (both included) as it has been ack'd by every node
00000a24.00000f10::2016/12/08-08:06:24.682 INFO  [GEM] Node 14: Deleting [2:63726 , 2:63727] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:24.713 INFO  [GEM] Node 14: Deleting [5:4315 , 5:4316] (both included) as it has been ack'd by every node
00000a24.00000e8c::2016/12/08-08:06:24.744 INFO  [GEM] Node 14: Deleting [13:33614 , 13:33615] (both included) as it has been ack'd by every node
00000a24.00000e00::2016/12/08-08:06:25.182 INFO  [DCM] SetVolumeSnapshotState: volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, state PostCommit, disableCOW false, instanceId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:25.182 INFO  [DCM] processing SetSnapshotState: resource 1d554cbe-1803-4de9-a26a-ff6185924810, volume 1d554cbe-1803-4de9-a26a-ff6185924810:135266304, uniqueId 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12
00000a24.00000e00::2016/12/08-08:06:25.182 INFO  [DCM] Push.AsyncHandleSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:25.182 INFO  [DCM] SyncHandler for san10_vol16: action dcm/setSnapshotState, sequence 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2:12 ({HOSTNAME})
00000a24.000029c4::2016/12/08-08:06:25.182 INFO  [DCM] SetSnapshotStateHandler(enter): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ state PostCommit
00000a24.00000fec::2016/12/08-08:06:25.213 INFO  [DCM] CsvFs Listener: snapshot [volume 0467edbf-2759-41da-b9e1-5227480ff6f7, state CsvFsSnapshotStateCommit, sequence 15309]
00000a24.000029c4::2016/12/08-08:06:25.213 INFO  [DCM] SetSnapshotStateHandler: volume Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7, redirect count 0, status 0
00000a24.000026c0::2016/12/08-08:06:26.651 INFO  [RCM [RES] {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00001a90::2016/12/08-08:06:26.651 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00001ab0::2016/12/08-08:06:26.651 INFO  [RCM [RES] Virtual Machine {VM NAME} Resources embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.000026f4::2016/12/08-08:06:29.260 INFO  [RCM [RES] SCVMM {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] UpdateDatabaseForSnapshotSet: resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [GUM] Node 14: Processing RequestLock 14:772
00000a24.00000ea4::2016/12/08-08:06:31.776 INFO  [GUM] Node 14: Processing GrantLock to 14 (sent by 5 gumid: 972499)
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [GUM] Node 14: executing request locally, gumId:972500, my action: snap/addSnapshotInfo, # of updates: 1
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] GumAddSnapshotInfo: payload size 96
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] db.CreateDcmSnapshotVolume snapshot <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d on volume Volume16:0467edbf-2759-41da-b9e1-5227480ff6f7
00000a24.00000c8c::2016/12/08-08:06:31.776 INFO  [DCM] RefreshSnapshotMap(DcmDb insert): volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:31.791 INFO  [DCM] PostCommitSnapshots(exit): 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.791 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PreFinalCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PreFinalCommitSnapshots
00000a24.00000c8c::2016/12/08-08:06:31.807 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PreFinalCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] SetSnapshotAutoRecoverAttribute: device \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, readOnly SetSnapshotReadOnly::No
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] SetSnapshotSetAutoRecovery: snapshot 2e6f6710-bcaf-11e6-80d5-98be943702d0, HrError(0x00000000)
00000a24.00000c8c::2016/12/08-08:06:31.838 INFO  [DCM] Sending revision list. Message id 14:10
00000a24.00000c8c::2016/12/08-08:06:31.854 INFO  [DCM] Sending revision list completed on all nodes. Message id 14:10
00000a24.00000c8c::2016/12/08-08:06:31.854 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] FilterAttach hr HrError(0x00000000) device \Device\HarddiskVolumeShadowCopy4
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] Filter.CsvFltMessageTypeSetVolumeRevisionNumberByDeviceName(Sequence 62, Device \Device\HarddiskVolumeShadowCopy4, Revision=1)
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] ClRtlCreateNetShare status 0 shareName b3a2e432-40e2-4d2e-bb17-1b0179a7d83d$, sharePath \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}\
00000a24.00000f58::2016/12/08-08:06:32.104 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000c8c::2016/12/08-08:06:32.104 INFO  [DCM] SetDownLevelNFilter: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \Device\HarddiskVolumeShadowCopy4
00001134.000016c0::2016/12/08-08:06:32.104 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER - Resource is online
00001134.000016c0::2016/12/08-08:06:32.104 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER: Disk number is 15
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] GetDiskGeometry: [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512]
00000a24.00000c8c::2016/12/08-08:06:32.119 WARN  [DCM] IOCTL_STORAGE_QUERY_PROPERTY(StorageDeviceSeekPenaltyProperty): status 1
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] STORAGE_PROPERTY_ID StorageDeviceResiliencyProperty not supported: status 87
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] new VolumeStateChangeRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:31
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] processing VolumeStateChangeExRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:31
00000a24.00000c8c::2016/12/08-08:06:32.119 INFO  [DCM] Push.AsyncActivateSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] SyncHandler for san10_vol16: action dcm/activateSnapshot, sequence 14:31 ({HOSTNAME})
00000a24.00000f58::2016/12/08-08:06:32.119 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] CreatePseudoVolume: remote vbus to \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] CreatePseudoVolume: CREATE_PDO(1): UniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, DeviceNumber 0, DiskNumber 2147483647, PartitionNumber 0, Offset 0, Length 6596932403200. Flags 0x2
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] FilterAgent: CREATE_PDO(2): Geometry [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512], SerialNumber 0, TargetVolume \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, FSBlob \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:32.119 INFO  [DCM] MapVolume: waiting for mount(0): <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume arrived
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PnpAddVolume: Adding volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PnpAddVolume: Ignoring cluster pseudo volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.229 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence , state CsvFsVolumeStateChangeFromPNP->CsvFsVolumeStatePaused, status 0x0]
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: mount reported for volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateInit --> CsvFsVolumeStatePaused
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: Map <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] Filter.Attach CsvFlt (\??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}\) => : status HrError(0x801f0012)
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] Filter.CsvFltMessageTypeSetVolumeRevisionNumberByDeviceName(Sequence 63, Device \Device\HarddiskVolumeShadowCopy4, Revision=1)
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: SetDownlevel: payload SetDownlevel(1): uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, diskNumber 0, offset 0, isLocal true, FS target \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, directIO true, targetNode 14
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: SetDownlevel(2): partition size 6596932403200, volume size 6596932337664, sector size 512, EnableCOW false, ForceWriteThrough 0, EnableBlockCache false, DiskConnectivityMdsNodeOnly, perfCnt 'snapshot(Volume16:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d)', sequence <14:31><15310>
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence <14:31><15310>, state CsvFsVolumeStateChangeFromSetDownlevel->CsvFsVolumeStateSetDownlevel, status 0x0]
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: SetDownlevel completed: volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] SetDownLevelNFilter: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \Device\HarddiskVolumeShadowCopy4
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] SetStorageProperties: <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] MapVolume: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d mapped in mode MappedDirectIO, desired [DirectIO: true, Maint: false, EnableCOW: false], actual [DirectIO: true, Maint: false, EnableCOW: false], disk 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] ActivateNetworkPath b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: CsvFsNotificationBlockPathConnected state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83dstate CsvFsVolumeStateChangeFromSetDownlevel->CsvFsVolumeStateSetDownlevel, status 0x0]
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: ActivateNetworkPath completed: volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, status 0
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] ActivateVolume: <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: ChangeCsvFsState: uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateActive, sequence <14:31><15311>
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence <14:31><15311>, state CsvFsVolumeStateChangeFromActivate->CsvFsVolumeStateActive, status 0x0]
00000a24.00000fec::2016/12/08-08:06:32.229 INFO  [DCM] UnmapPolicy::enter_CountingToGood(00000000-0000-0000-0000-000000000000): goodTimer R1...75, badTimer P0...150, state CountingToGood
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] FilterAgent: Activate completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.229 INFO  [DCM] ActivateVolume: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d activated in mode ActiveDirectIO
0000044c.00002198::2016/12/08-08:06:32.260 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002198::2016/12/08-08:06:32.260 INFO  [CAM] Token Created, Client Handle: 8001a35c
0000044c.00002198::2016/12/08-08:06:32.276 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002198::2016/12/08-08:06:32.276 INFO  [CAM] Token Created, Client Handle: 800247d0
0000044c.00002198::2016/12/08-08:06:32.308 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002198::2016/12/08-08:06:32.308 INFO  [CAM] Token Created, Client Handle: 80016d8c
00001134.00001ae8::2016/12/08-08:06:32.338 INFO  [RES] Physical Disk: PNP: PnpTriggerCSVMount opened \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] CreateSnapshotSet(exit): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, HrError(0x00000000)
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event GetSnapshotProperties
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: snapshotId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] SnapshotApplicationInfo(Context): VSS_CTX_CLIENT_ACCESSIBLE
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotInfoFromDevice: snapshot \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00001028::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: 0
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event GetSnapshotProperties
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: snapshotId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] SnapshotApplicationInfo(Context): VSS_CTX_CLIENT_ACCESSIBLE
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotInfoFromDevice: snapshot \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000c8c::2016/12/08-08:06:32.369 INFO  [DCM] GetSnapshotProperties: 0
00000a24.00001028::2016/12/08-08:06:32.604 INFO  [DCM] WaitForPostOnlineToComplete(enter): resource san10_vol16
00000a24.00001028::2016/12/08-08:06:32.604 INFO  [DCM] HandleSwProviderEvent(enter): resource 'san10_vol16' provider event PostFinalCommitSnapshots
00000a24.00001028::2016/12/08-08:06:32.604 INFO  [DCM] CreateSnapshotSet(enter): snapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2, event PostFinalCommitSnapshots
00000a24.00000e00::2016/12/08-08:06:32.604 INFO  [DCM] SetVolumeSnapshotState: volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\, state DeleteCsvSnapshot, disableCOW false, instanceId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000e00::2016/12/08-08:06:32.604 INFO  [DCM] processing SetSnapshotState: resource 1d554cbe-1803-4de9-a26a-ff6185924810, volume 1d554cbe-1803-4de9-a26a-ff6185924810:135266304, uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000e00::2016/12/08-08:06:32.604 INFO  [DCM] Push.AsyncHandleSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] SyncHandler for san10_vol16: action dcm/setSnapshotState, sequence b3a2e432-40e2-4d2e-bb17-1b0179a7d83d ({HOSTNAME})
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] SetSnapshotStateHandler(enter): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ state DeleteCsvSnapshot
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] Deleting snapshot pseudo volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000f58::2016/12/08-08:06:32.604 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] CleanupNFilter <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] FilterAgent: ChangeCsvFsState: uniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateInit, sequence <14:31><15312>
00000a24.00000fec::2016/12/08-08:06:32.604 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence <14:31><15312>, state CsvFsVolumeStateChangeFromInit->CsvFsVolumeStateInit, status 0x0]
00000a24.00000fec::2016/12/08-08:06:32.604 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence , state CsvFsVolumeStateChangeFromSurpriseRemoval->CsvFsVolumeStateSurpriseRemoved, status 0x0]
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] DeleteSnapshotPseudoVolume(exit): snapshot b3a2e432-40e2-4d2e-bb17-1b0179a7d83d status 0
00000a24.000029c4::2016/12/08-08:06:32.604 INFO  [DCM] SetSnapshotStateHandler(no-op): volume \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\ status 0
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume disappeared
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpRemoveVolume: Removing volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpRemoveVolume: starting CSV removal notify thread for \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpRemoveVolume: volume removal: ignoring cluster pseudo volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.0000134c::2016/12/08-08:06:32.604 INFO  [RES] Physical Disk: PnpNotifyCSVRemoval: device \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00000a24.00000c8c::2016/12/08-08:06:32.604 INFO  [RCM] PreProcessControl CLUSCTL_RESOURCE_STORAGE_CSV_UPDATE_INFO for 'san10_vol16'
00000a24.00000c8c::2016/12/08-08:06:32.604 INFO  [DCM] UpdateVolumeInfo: resource 'san10_vol16'
00000a24.00000c8c::2016/12/08-08:06:32.604 INFO  [DCM] UpdateVolumeInfo: volume removal for b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00000ea4::2016/12/08-08:06:32.776 INFO  [GEM] Node 14: Deleting [5:4317 , 5:4317] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:33.510 INFO  [CAM] Token Created, Client Handle: 800021b8
0000044c.00001840::2016/12/08-08:06:33.541 INFO  [CAM] Token Created, Client Handle: 8001ca6c
00000a24.00001a94::2016/12/08-08:06:34.135 INFO  [GEM] Node 14: Deleting [6:57514 , 6:57514] (both included) as it has been ack'd by every node
00000a24.00000f4c::2016/12/08-08:06:34.151 INFO  [GEM] Node 14: Deleting [8:29750 , 8:29750] (both included) as it has been ack'd by every node
00000a24.000014ac::2016/12/08-08:06:34.166 INFO  [GEM] Node 14: Deleting [9:5161 , 9:5161] (both included) as it has been ack'd by every node
00000a24.00000f54::2016/12/08-08:06:34.166 INFO  [GEM] Node 14: Deleting [7:485 , 7:485] (both included) as it has been ack'd by every node
00000a24.00000e70::2016/12/08-08:06:34.182 INFO  [GEM] Node 14: Deleting [4:4161 , 4:4162] (both included) as it has been ack'd by every node
00000a24.00000e6c::2016/12/08-08:06:34.182 INFO  [GEM] Node 14: Deleting [1:59141 , 1:59148] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:34.182 INFO  [GEM] Node 14: Deleting [10:2214 , 10:2224] (both included) as it has been ack'd by every node
00000a24.00000e74::2016/12/08-08:06:34.198 INFO  [GEM] Node 14: Deleting [11:40968 , 11:40969] (both included) as it has been ack'd by every node
00000a24.00000e88::2016/12/08-08:06:34.198 INFO  [GEM] Node 14: Deleting [3:27248 , 3:27248] (both included) as it has been ack'd by every node
00000a24.00002a04::2016/12/08-08:06:34.213 INFO  [GEM] Node 14: Deleting [12:3050 , 12:3051] (both included) as it has been ack'd by every node
00000a24.00000f10::2016/12/08-08:06:34.229 INFO  [GEM] Node 14: Deleting [2:63728 , 2:63729] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:34.260 INFO  [GEM] Node 14: Deleting [5:4318 , 5:4319] (both included) as it has been ack'd by every node
00000a24.00000e8c::2016/12/08-08:06:34.276 INFO  [GEM] Node 14: Deleting [13:33616 , 13:33617] (both included) as it has been ack'd by every node
00000a24.000026c0::2016/12/08-08:06:34.354 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.00002b68::2016/12/08-08:06:34.354 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.0000065c::2016/12/08-08:06:34.354 INFO  [RCM [RES] SCVMM {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.000026c0::2016/12/08-08:06:34.354 INFO  [RCM [RES] Virtual Machine {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
00000a24.0000240c::2016/12/08-08:06:34.354 INFO  [RCM [RES] {VM NAME} embedded failure notification, code=0 _isEmbeddedFailure=false _embeddedFailureAction=2
0000044c.00001840::2016/12/08-08:06:34.885 INFO  [CAM] Token Created, Client Handle: 80017f28
0000044c.00001840::2016/12/08-08:06:35.010 INFO  [CAM] Token Created, Client Handle: 8001c96c
0000044c.00001840::2016/12/08-08:06:35.119 INFO  [CAM] Token Created, Client Handle: 80013c48
00000a24.00000e70::2016/12/08-08:06:35.182 INFO  [GEM] Node 14: Deleting [4:4163 , 4:4165] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:35.213 INFO  [CAM] Token Created, Client Handle: 800246a4
00000a24.00000f04::2016/12/08-08:06:35.213 INFO  [GEM] Node 14: Deleting [10:2225 , 10:2234] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:35.338 INFO  [CAM] Token Created, Client Handle: 800199b4
00000a24.00000f4c::2016/12/08-08:06:35.416 INFO  [GEM] Node 14: Deleting [8:29751 , 8:29753] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:35.448 INFO  [CAM] Token Created, Client Handle: 8000c18c
00000a24.000014ac::2016/12/08-08:06:35.666 INFO  [GEM] Node 14: Deleting [9:5162 , 9:5164] (both included) as it has been ack'd by every node
00000a24.00000f54::2016/12/08-08:06:35.854 INFO  [GEM] Node 14: Deleting [7:486 , 7:488] (both included) as it has been ack'd by every node
00000a24.00001a94::2016/12/08-08:06:36.166 INFO  [GEM] Node 14: Deleting [6:57515 , 6:57517] (both included) as it has been ack'd by every node
0000044c.00001840::2016/12/08-08:06:36.182 INFO  [CAM] Token Created, Client Handle: 800240b4
0000044c.00001840::2016/12/08-08:06:36.291 INFO  [CAM] Token Created, Client Handle: 800175a0
00000a24.00002a04::2016/12/08-08:06:36.385 INFO  [GEM] Node 14: Deleting [12:3052 , 12:3057] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:36.448 INFO  [GEM] Node 14: Deleting [10:2235 , 10:2242] (both included) as it has been ack'd by every node
00000a24.00000e88::2016/12/08-08:06:36.619 INFO  [GEM] Node 14: Deleting [3:27249 , 3:27251] (both included) as it has been ack'd by every node
00000a24.00000e74::2016/12/08-08:06:36.963 INFO  [GEM] Node 14: Deleting [11:40970 , 11:40972] (both included) as it has been ack'd by every node
00000a24.00000f10::2016/12/08-08:06:37.401 INFO  [GEM] Node 14: Deleting [2:63730 , 2:63732] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:37.448 INFO  [GEM] Node 14: Deleting [10:2243 , 10:2255] (both included) as it has been ack'd by every node
00000a24.00000e8c::2016/12/08-08:06:37.932 INFO  [GEM] Node 14: Deleting [13:33618 , 13:33623] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:38.088 INFO  [GEM] Node 14: Deleting [5:4320 , 5:4322] (both included) as it has been ack'd by every node
00001134.000016c0::2016/12/08-08:06:38.573 INFO  [RES] Physical Disk: HarddiskpIsPartitionHidden: device \Device\Harddisk15\ClusterPartition2 8000000000000000
00000a24.00000f10::2016/12/08-08:06:38.588 INFO  [GEM] Node 14: Deleting [2:63733 , 2:63735] (both included) as it has been ack'd by every node
00000a24.00000f04::2016/12/08-08:06:38.635 INFO  [GEM] Node 14: Deleting [10:2256 , 10:2265] (both included) as it has been ack'd by every node
00000a24.00000c8c::2016/12/08-08:06:38.729 INFO  [NM] Received request from client address {HOSTNAME}.
00001134.00001944::2016/12/08-08:06:38.760 INFO  [RES] Physical Disk: HarddiskpIsPartitionHidden: device \Device\Harddisk15\ClusterPartition2 8000000000000000
00000a24.000026f4::2016/12/08-08:06:38.776 INFO  [DCM] GetCsvVolumeMountpoints: resource 'san10_vol16' outBufSize 250
00000a24.000026f4::2016/12/08-08:06:38.776 INFO  [DCM] GetCsvVolumeMountpoints: resource 'san10_vol16' not in MM \\?\Volume{0467edbf-2759-41da-b9e1-5227480ff6f7}\
00000a24.000026f4::2016/12/08-08:06:38.776 INFO  [DCM] GetCsvVolumeMountpoints: resource 'san10_vol16' partition '2' returns 58 bytes and status 0
00000a24.00000e74::2016/12/08-08:06:38.869 INFO  [GEM] Node 14: Deleting [11:40973 , 11:40975] (both included) as it has been ack'd by every node
00000a24.00000e70::2016/12/08-08:06:39.182 INFO  [GEM] Node 14: Deleting [4:4166 , 4:4168] (both included) as it has been ack'd by every node
00000a24.00000ea4::2016/12/08-08:06:39.369 INFO  [GEM] Node 14: Deleting [5:4323 , 5:4325] (both included) as it has been ack'd by every node
0000044c.00002198::2016/12/08-08:06:40.963 INFO  [CAM] Token Created, Client Handle: 80001754
0000044c.00002344::2016/12/08-08:06:43.213 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:43.213 INFO  [CAM] Token Created, Client Handle: 80023468
0000044c.00002344::2016/12/08-08:06:43.276 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:43.276 INFO  [CAM] Token Created, Client Handle: 80020cf8
0000044c.00002344::2016/12/08-08:06:44.494 INFO  [CAM] Substituting Token Owner: BUILTIN\Administrators, Original: NT AUTHORITY\SYSTEM
0000044c.00002344::2016/12/08-08:06:44.494 INFO  [CAM] Token Created, Client Handle: 8001b760
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] Provider::CreateSnapshotSet(enter): Event PostFinalCommitSnapshots, SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] DeleteProviderInterface()
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] Provider::CreateSnapshotSet(exit): SnapshotSet 8f4d19fc-b892-4df3-bcfb-f2452ea92bb2 HrError(0x00000000)
00000a24.00001028::2016/12/08-08:06:44.530 INFO  [DCM] SetSnapshotAutoRecoverAttribute: device \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, readOnly SetSnapshotReadOnly::Yes
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] SetSnapshotSetAutoRecovery: snapshot 2e6f6710-bcaf-11e6-80d5-98be943702d0, HrError(0x00000000)
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] Sending revision list. Message id 14:11
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] Sending revision list completed on all nodes. Message id 14:11
00000a24.00001028::2016/12/08-08:06:44.541 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00001028::2016/12/08-08:06:44.666 INFO  [DCM] FilterAttach hr HrError(0x00000000) device \Device\HarddiskVolumeShadowCopy4
00000a24.00001028::2016/12/08-08:06:44.666 INFO  [DCM] Filter.CsvFltMessageTypeSetVolumeRevisionNumberByDeviceName(Sequence 64, Device \Device\HarddiskVolumeShadowCopy4, Revision=2)
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] ClRtlCreateNetShare status 0 shareName b3a2e432-40e2-4d2e-bb17-1b0179a7d83d$, sharePath \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}\
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] SetCsvAppInfoGuid(): snapshot \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00000f58::2016/12/08-08:06:44.682 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] GetSnapshotDeviceName: device name \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, status 0
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] SetDownLevelNFilter: volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, target \Device\HarddiskVolumeShadowCopy4
00001134.00001944::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER - Resource is online
00001134.00001944::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk <san10_vol16>: GET_DISK_NUMBER: Disk number is 15
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] GetDiskGeometry: [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512]
00000a24.00001028::2016/12/08-08:06:44.682 WARN  [DCM] IOCTL_STORAGE_QUERY_PROPERTY(StorageDeviceSeekPenaltyProperty): status 1
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] STORAGE_PROPERTY_ID StorageDeviceResiliencyProperty not supported: status 87
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] new VolumeStateChangeRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:32
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] processing VolumeStateChangeExRequest: resource 1d554cbe-1803-4de9-a26a-ff6185924810, sequence 14:32
00000a24.00001028::2016/12/08-08:06:44.682 INFO  [DCM] Push.AsyncActivateSnapshot for 1d554cbe-1803-4de9-a26a-ff6185924810
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] SyncHandler for san10_vol16: action dcm/activateSnapshot, sequence 14:32 ({HOSTNAME})
00000a24.00000f58::2016/12/08-08:06:44.682 INFO  [DCM] Set NfltPauseComplete event for  b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] PauseNFilter completed for <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] CreatePseudoVolume: remote vbus to \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] CreatePseudoVolume: CREATE_PDO(1): UniqueId b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, DeviceNumber 0, DiskNumber 2147483647, PartitionNumber 0, Offset 0, Length 6596932403200. Flags 0x3
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] FilterAgent: CREATE_PDO(2): Geometry [Cylinders 802048, Media 12, TracksPerCylinder 255, SectorsPerTrack 63, BytesPerSector 512], SerialNumber 0, TargetVolume \??\GLOBALROOT\Device\HarddiskVolumeShadowCopy{2e6f6710-bcaf-11e6-80d5-98be943702d0}, FSBlob \Device\SmbCsv\<local>\CSV$\:{db19d832-b034-46ed-a6c5-61e0ebe370d1}
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] MapVolume: waiting for mount(0): <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PNP: \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062} volume arrived
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Adding volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00000a24.00000fec::2016/12/08-08:06:44.682 INFO  [DCM] CsvFs Listener: state [volume b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, sequence , state CsvFsVolumeStateChangeFromPNP->CsvFsVolumeStatePaused, status 0x0]
00000a24.00000fec::2016/12/08-08:06:44.682 INFO  [DCM] CsvFs Listener: mount reported for volume <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d, state CsvFsVolumeStateInit --> CsvFsVolumeStatePaused
00000a24.000029c4::2016/12/08-08:06:44.682 INFO  [DCM] FilterAgent: Map <snapshot>:b3a2e432-40e2-4d2e-bb17-1b0179a7d83d
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Ignoring cluster pseudo volume \\?\STORAGE#CSVVolume#{b3a2e432-40e2-4d2e-bb17-1b0179a7d83d}#{7f108a28-9833-4b3b-b780-2c6b5fa5c062}
00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0
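For reference, the length of the snapshot window in the excerpt above can be measured by parsing the cluster-log timestamps. This is just an illustrative Python sketch (not part of any Veeam or Microsoft tooling); the two log lines are abbreviated copies of the first and last lines of the excerpt:

```python
import re
from datetime import datetime

# Abbreviated first and last lines of the cluster-log excerpt above.
LOG_LINES = [
    "00000a24.00000c8c::2016/12/08-08:06:19.526 INFO  [DCM] BeginPrepareSnapshot: ...",
    "00001134.00001410::2016/12/08-08:06:44.682 INFO  [RES] Physical Disk: PnpAddVolume: Add Volume exit, status 0",
]

# Timestamp follows the 'pid.tid::' prefix: YYYY/MM/DD-HH:MM:SS.mmm
TS_RE = re.compile(r"::(\d{4}/\d{2}/\d{2}-\d{2}:\d{2}:\d{2}\.\d{3}) ")

def parse_ts(line: str) -> datetime:
    """Extract the timestamp that follows the thread-id prefix."""
    return datetime.strptime(TS_RE.search(line).group(1), "%Y/%m/%d-%H:%M:%S.%f")

window = (parse_ts(LOG_LINES[-1]) - parse_ts(LOG_LINES[0])).total_seconds()
print(f"snapshot window: {window:.1f} s")  # ~25.2 s for this excerpt
```

So from BeginPrepareSnapshot to the pseudo-volume arriving, this excerpt spans roughly 25 seconds, which lines up with the latency spikes described below.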


This correlates with the Event ID 833 entries we're seeing on a SQL Server VM that sits on the above "san10_vol16".

Code: Select all
Log Name:      Application
Source:        {DATABASE NAME}
Date:          8/12/2016 8:06:41 a.m.
Event ID:      833
Task Category: Server
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      {VM NAME}
Description:
SQL Server has encountered 5 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\path\to\database\name.ldf] in database id 7.  The OS file handle is 0x00000000000009D8.  The offset of the latest long I/O is: 0x00000029cf1000
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="{DATABASE NAME}" />
    <EventID Qualifiers="16384">833</EventID>
    <Level>4</Level>
    <Task>2</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2016-12-07T19:06:41.000000000Z" />
    <EventRecordID>150029</EventRecordID>
    <Channel>Application</Channel>
    <Computer>{COMPUTER NAME}</Computer>
    <Security />
  </System>
  <EventData>
    <Data>5</Data>
    <Data>15</Data>
    <Data>C:\path\to\database\name.ldf</Data>
    <Data>7</Data>
    <Data>00000000000009D8</Data>
    <Data>0x00000029cf1000</Data>
    <Binary>410300000A000000150000004800570052004C0059004E00430046004500300031005C005200540043004C004F00430041004C00000000000000</Binary>
  </EventData>
</Event>
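For anyone scripting against these events: the useful values in the 833 XML above are the positional `<Data>` elements (occurrence count, threshold in seconds, file path, database id). A minimal Python sketch, using an abbreviated copy of the event above (note the default event namespace must be passed explicitly):

```python
import xml.etree.ElementTree as ET

# Abbreviated copy of the Event ID 833 record shown above.
EVENT_XML = """\
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <EventID Qualifiers="16384">833</EventID>
    <TimeCreated SystemTime="2016-12-07T19:06:41.000000000Z" />
  </System>
  <EventData>
    <Data>5</Data>
    <Data>15</Data>
    <Data>C:\\path\\to\\database\\name.ldf</Data>
    <Data>7</Data>
  </EventData>
</Event>"""

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}
root = ET.fromstring(EVENT_XML)
event_id = root.findtext("e:System/e:EventID", namespaces=NS)
data = [d.text for d in root.findall("e:EventData/e:Data", NS)]
# Positional fields, per the <EventData> layout in the record above.
occurrences, threshold_s, db_file, db_id = data[0], data[1], data[2], data[3]
print(event_id, occurrences, threshold_s, db_file, db_id)
```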


We have also collected performance counters during backups, and we can clearly see write latency and disk queue length on the CSV spike for up to 30 seconds when the Veeam CSV snapshot is initiated.
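A simple way to flag such spikes from counter data exported to CSV (e.g. via perfmon or relog). The counter name and the sample values below are illustrative assumptions, not our actual capture:

```python
import csv
import io

# Hypothetical export of the CSV volume's write-latency counter; the
# values here are invented to mimic a snapshot-time spike.
SAMPLE = """\
"Time","Avg. Disk sec/Write"
"08:06:30","0.004"
"08:06:35","0.180"
"08:06:40","0.240"
"08:06:45","0.006"
"""

THRESHOLD_S = 0.050  # 50 ms; pick whatever counts as a "spike" for your storage

spikes = []
for row in csv.DictReader(io.StringIO(SAMPLE)):
    latency = float(row["Avg. Disk sec/Write"])
    if latency > THRESHOLD_S:
        spikes.append((row["Time"], latency))

for t, v in spikes:
    print(f"{t}: {v * 1000:.0f} ms write latency")
```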

The question we have is: is this expected behavior? Should guest VMs suffer from IO issues while the CSV is being snapshotted?


Thanks!
Pter.
pterpumpkin
Influencer
 
Posts: 11
Liked: 1 time
Joined: Tue Jun 14, 2016 9:36 am
Full Name: Pter Pumpkin


by Vitaliy S. » Fri Dec 09, 2016 11:42 am

Hi Pter,

The load on the backed-up VM and on the datastore hosting it is expected; that's why it is recommended to run backups outside the business-hours window. As for your particular situation, I would first check what else is consuming datastore resources, and would probably try to optimize (re-group) the VMs on that datastore.

Thanks!
Vitaliy S.
Veeam Software
 
Posts: 19558
Liked: 1102 times
Joined: Mon Mar 30, 2009 9:13 am
Full Name: Vitaliy Safarov


by nmdange » Fri Dec 09, 2016 11:08 pm

This is one of the reasons Microsoft introduced the new resilient change tracking (RCT) feature in Hyper-V 2016, which no longer requires VSS snapshots on the CSVs.
nmdange
Expert
 
Posts: 193
Liked: 57 times
Joined: Thu Aug 20, 2015 9:30 pm


by pterpumpkin » Mon Dec 12, 2016 8:31 pm

Thanks guys!

How do other people deal with this issue without causing interruptions to their servers? We need to run backup/replication jobs during business hours.

by Vitaliy S. » Mon Dec 12, 2016 8:46 pm

One thing people do is change the default timeouts (in the case of clusters) or run fewer concurrent tasks against the same datastore. I would also consider looking at Hyper-V 2016, as nmdange advises.

by pterpumpkin » Mon Dec 12, 2016 8:50 pm

Thanks Vitaliy!

Are you able to provide more specific details about the default timeouts you're referring to? Are these Veeam timeouts, or cluster/CSV timeouts?

by Vitaliy S. » Wed Dec 14, 2016 5:03 pm

Pter, the specifics depend on the exact issue you're observing. If you search these forums you will find timeout settings for SQL Server clusters and AlwaysOn deployments. I would recommend searching for the recommendation that applies to your case, based on the Windows event log entries (the exact IDs of the events you see).

by pterpumpkin » Wed Dec 14, 2016 7:13 pm

Thanks Vitaliy.

My question was geared more towards where you were suggesting the timeout setting(s) be changed, e.g. CSV timeouts, SQL timeouts, cluster timeouts, iSCSI timeouts, etc. But I see now that you were referring to SQL timeouts :)

by Vitaliy S. » Wed Dec 14, 2016 7:28 pm

Yes, you cannot change storage or hardware behavior during I/O-intensive operations, but you can adjust the timeouts of the applications that are suffering from low I/O performance.

by jdixon » Fri Dec 30, 2016 2:39 am

I have the exact same problem. I verified it isn't my SAN, and it appears it isn't my fiber links, so I think it's the HBA. We have a 3PAR 12Gbps SAS all-SSD SAN with four 8Gbps fiber paths from switch to SAN and two fiber paths from switch to servers.

HP verified it isn't the SAN, so I'm really at a loss as to what it is.
jdixon
Novice
 
Posts: 5
Liked: 1 time
Joined: Wed Nov 23, 2016 4:30 pm
Full Name: Jacob Dixon


by pterpumpkin » Fri Dec 30, 2016 3:47 am

Hi jdixon,

From what I now understand, this is expected behaviour of a CSV.

I have read that this will be resolved in Server 2016, but we're yet to confirm this.

I believe smaller CSVs will be less affected, as the CSV snapshot will complete more quickly.

Do off-host proxy, hardware-based backups take a snapshot of the CSV? If not, this could also be another fix.



Pter.

by foggy » Fri Dec 30, 2016 12:16 pm

pterpumpkin wrote:Do off-host proxy, hardware-based backups take a snapshot of the CSV? If not, this could also be another fix.

Yes, in that case the snapshot is mounted onto the off-host proxy.
foggy
Veeam Software
 
Posts: 14742
Liked: 1080 times
Joined: Mon Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson


by RWeiss » Tue Jan 31, 2017 2:12 pm

We are having the same issue. From what I've read, the problem is that when the software VSS provider is used, the CSV basically switches to redirected I/O when VMs that are not on the owner node are backed up. Without a hardware VSS provider, I do not believe there is a way to reduce the impact.
RWeiss
Influencer
 
Posts: 19
Liked: 1 time
Joined: Thu Nov 13, 2014 10:29 am


by nmdange » Tue Jan 31, 2017 4:11 pm

RWeiss wrote:We are having the same issue. From what I've read, the problem is that when the software VSS provider is used, the CSV basically switches to redirected I/O when VMs that are not on the owner node are backed up. Without a hardware VSS provider, I do not believe there is a way to reduce the impact.

If you use a Scale-Out File Server cluster to present the storage to Hyper-V over SMB, that would eliminate the redirected I/O.

by jaredplaisted » Thu Feb 02, 2017 9:23 pm

Just a quick question: how many VMs per CSV do you have?

Sharing something we encountered about two years ago, when we first started using Hyper-V 2012 R2, which may be relevant.

We used to have similar issues to what you describe. Back then we had large CSVs (3PAR) with approximately 20-30 VMs per CSV, and we experienced latency and VMs temporarily freezing whenever backups ran. Even when backing up a single VM, every other VM on the same CSV would get hit by the issue, and it persisted regardless of throttling limits or of using the software or hardware VSS provider. The freezing/latency only occurred during the first part of the backup job, while the snapshot is being created; once the snapshot was created and the backup began reading/transferring data, performance returned to normal.

Since we re-architected our CSVs and reduced the number of VMs per CSV down to 5-10, we no longer encounter the issue. We can now run multiple concurrent backups across multiple CSVs without any latency or other problems.
jaredplaisted
Service Provider
 
Posts: 3
Liked: never
Joined: Mon Dec 31, 2012 1:16 am
Full Name: Jared Plaisted
