- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Mar 17, 2014 11:06 am
- Full Name: Dave Hamer
- Contact:
Bored of Ghost Disks
One of our clients is having major issues with Ghost disks - left behind by Veeam after a job is cancelled, interrupted, or fails.
This VM is supposed to have 3 disks...
Because of this, the replicas then fail with an "invalid snapshot configuration" error, since the disks are still attached to another machine.
I know that ideally we would find the cause of the failing jobs; however, some jobs simply run far too long and we have to cancel them.
Veeam is 8.0.0.917 and VMware is 5.5U2.
After reading this article: http://www.veeam.com/blog/8-gems-in-vee ... unter.html I thought that Veeam had put a feature in to stop this, but it doesn't seem to be working. Surely you guys know what configuration a machine was in before a backup started and can put the same configuration back afterwards? It's thoroughly frustrating!
Any input appreciated,
Dave
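For anyone who wants to script a quick check for these leftovers: below is a minimal pyVmomi sketch that flags disks attached to a proxy VM whose backing VMDKs live outside the proxy's own folder. The vCenter address, credentials, and proxy VM name are placeholders (not values from this thread), and the folder-path comparison is only a heuristic, not how Veeam itself tracks hot-added disks.

# Minimal sketch: flag disks attached to a proxy VM whose backing VMDKs live
# outside the proxy's own folder. Hostname, credentials, and the proxy VM
# name are placeholders; the folder comparison is only a heuristic.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    proxy = next((vm for vm in view.view if vm.name == "veeam-proxy-01"), None)
    if proxy:
        # "[ds1] veeam-proxy-01/veeam-proxy-01.vmx" -> "[ds1] veeam-proxy-01"
        own_folder = proxy.config.files.vmPathName.rsplit("/", 1)[0]
        for dev in proxy.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk) \
                    and not dev.backing.fileName.startswith(own_folder):
                print("possible ghost disk:", dev.deviceInfo.label,
                      dev.backing.fileName)
    view.DestroyView()
finally:
    Disconnect(si)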
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Bored of Ghost Disks
Hi Dave,
Yes, you've correctly stated that the original issue should be investigated more deeply if you want to avoid these situations. If I understand your configuration correctly, you're using hot-add proxy servers, right? Have you considered switching to network mode until you find the reason for the disks not being removed from the proxy server?
P.S. As for the Snapshot Hunter feature, it should detect snapshots on your source VMs and try to consolidate them; it cannot re-attach disks back to the original VM.
Thanks!
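For what it's worth, if you are detaching these disks from the proxy by hand via the API, the critical detail is to remove the device without a file operation - setting fileOperation to destroy would delete the replica's VMDK from the datastore. A rough pyVmomi sketch of that reconfigure call, where proxy and ghost_disk come from a lookup like the one sketched earlier in the thread:

# Sketch: detach a hot-added disk from the proxy WITHOUT deleting the VMDK.
# 'proxy' is the vim.VirtualMachine and 'ghost_disk' the vim.vm.device.VirtualDisk
# found by a lookup such as the one in the earlier sketch.
from pyVim.task import WaitForTask
from pyVmomi import vim

def detach_ghost_disk(proxy, ghost_disk):
    dev_spec = vim.vm.device.VirtualDeviceSpec()
    dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.remove
    # Deliberately no dev_spec.fileOperation: 'destroy' would delete the
    # backing VMDK from the datastore, which is exactly what we must avoid.
    dev_spec.device = ghost_disk
    spec = vim.vm.ConfigSpec(deviceChange=[dev_spec])
    WaitForTask(proxy.ReconfigVM_Task(spec=spec))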
- Chief Product Officer
- Posts: 31814
- Liked: 7302 times
- Joined: Jan 01, 2006 1:01 am
- Location: Baar, Switzerland
- Contact:
Re: Bored of Ghost Disks
Vitaliy is correct - the above issue has nothing to do with snapshots, so Snapshot Hunter functionality is irrelevant here. This needs to be investigated more closely with support. Is this a proxy VM on your screenshot, by any chance? Thanks.
- Influencer
- Posts: 14
- Liked: 1 time
- Joined: Mar 17, 2014 11:06 am
- Full Name: Dave Hamer
- Contact:
Re: Bored of Ghost Disks
Hi Chaps,
Thanks for the responses. Another morning, another 8 ghost disks... Yep - this is indeed a proxy VM; there are actually 3-4, but this one seems to be getting punished the most at the moment.
Last night the backup job failed with the following errors:
This happened because Veeam had somehow registered two of each replica, so yesterday I had to manually clean up all of the replicas, and apparently the mappings were incorrect. However, it is interesting that this was the failure that caused the ghost disks - is there a bug in the cleanup routine when a job fails due to mapping targets?
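Duplicate registrations like that can at least be spotted from the API before they cause a failure. A quick sketch (pyVmomi again; the connection details are placeholders) that reports any VM name registered more than once in the vCenter inventory:

# Sketch: report VM names registered more than once in the vCenter inventory,
# e.g. a replica that ended up registered twice. Credentials are placeholders.
import ssl
from collections import Counter
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    counts = Counter(vm.name for vm in view.view)
    for name, n in sorted(counts.items()):
        if n > 1:
            print(f"{name} is registered {n} times")
    view.DestroyView()
finally:
    Disconnect(si)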
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Bored of Ghost Disks
DaveBristolIT wrote: However, it is interesting that this was the failure that caused the ghost disks - is there a bug in the cleanup routine when a job fails due to mapping targets?
I haven't seen any similar cases with v8, so first of all I would make sure you have all the latest updates installed in your vSphere infrastructure, and then switch to network processing mode while investigating this issue with the technical support team. Thanks!
- Expert
- Posts: 223
- Liked: 15 times
- Joined: Jul 02, 2009 8:26 pm
- Full Name: Jim
- Contact:
Re: Bored of Ghost Disks
I, too, am having lots of trouble with "ghost" disks, and only since v8 Patch 2. In my case I'm replicating 2 VMs across a 50 Mb fiber WAN... a configuration that has worked well for a couple of years. But after upgrading to Patch 2 I can't get this particular job stable.
It starts with the replication job failing (after several nights of good replications) with "Error: Failed to open VDDK disk [[datastore1] arena1_replica_mid/arena1-000007.vmdk] ( is read-only mode - [false] ) Failed to open virtual disk Logon attempt with parameters [VC/ESX: [vcentervm.mydomain.com];Port: 443;Login: [root@localos];VMX Spec: [moref=vm-19505];Snapshot mor: [snapshot-20408];Transports: [hotadd:nbd];Read Only: [false]] failed because of the following errors: Failed to open disk for write. Failed to download disk. Reconnectable protocol device was closed. Failed to upload disk."
And then subsequent attempts of the same job fail with: "Processing arena1 Error: Detected an invalid snapshot configuration. Processing Gringotts-ACS Error: A general system error occurred: Failed to lock the file". When I look at the proxy VM at the replica site, it has both drives for arena1 and Gringotts-ACS still mounted, and I have to disconnect them from the proxy manually.
If I delete everything on the replication datastore and start the job fresh again, it works for a few nights and then the same loop happens.
YES, I will open a case with support, but wanted to put this here as it seems very similar to the OP's issue. Something about Patch 2 changed the way my replication jobs work over the WAN, and they ain't happy.
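One simple thing worth checking after a failure like this, while the support case is open, is whether vSphere has flagged the replica's disks as needing consolidation - that flag often accompanies "invalid snapshot configuration" states. A small sketch, where vm is a vim.VirtualMachine looked up as in the earlier examples:

# Sketch: check whether a VM needs disk consolidation and trigger it if so.
# 'vm' is a vim.VirtualMachine looked up as in the earlier sketches.
from pyVim.task import WaitForTask

def consolidate_if_needed(vm):
    if vm.runtime.consolidationNeeded:
        print(f"{vm.name}: consolidation needed, starting...")
        WaitForTask(vm.ConsolidateVMDisks_Task())
    else:
        print(f"{vm.name}: no consolidation pending")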
- VP, Product Management
- Posts: 27377
- Liked: 2800 times
- Joined: Mar 30, 2009 9:13 am
- Full Name: Vitaliy Safarov
- Contact:
Re: Bored of Ghost Disks
Hi Jim, thanks for posting to an existing topic. Please let me know your case ID so that I can update this topic with the resolution later.