stevenrodenburg1
Expert
Posts: 135
Liked: 20 times
Joined: May 31, 2011 9:11 am
Full Name: Steven Rodenburg
Location: Switzerland
Contact:

B&R 5.02 - FLR not possible due to NFS Mount problem

Post by stevenrodenburg1 »

Hello all,

I was at a customer's site yesterday; they are unable to do FLR restores. They choose "Other OS" to be able to access GPT partitions on very large file servers (Other OS is needed because the normal wizard does not understand GPT disks).
This works fine everywhere else, but not at this customer.

The problem lies with mounting the NFS datastore to the Veeam VM. They have general problems in this area, and as a consequence, FLR in appliance mode does not work either.

Nine out of ten times when they try to do an FLR or an Instant Restore, they get errors saying that the datastore already exists.

It's a Swiss (German) system. The error popup comes up in German, but translated it says that the name "VeeamBackup_SSZH0037" already exists.
That is the name of the NFS datastore that ESX tries to mount from the Veeam B&R virtual host called SSZH0037.

There has been a ticket (5139938) open since the 18th of July, but Veeam Support in Russia is somehow unable to solve it. All they do is ask for the logs, over and over again, endlessly, without getting anywhere. That is why I suggested to the customer yesterday that we try a different channel (this forum).

What I suspect is that the problem is not entirely Veeam's. I think that the vCenter database already contains that datastore name "VeeamBackup_SSZH0037" somewhere (it's just not visible).
The reason I say this is that Veeam does not tell vSphere to remove NFS datastores it has just used for an FLR or Instant Restore. They stay behind.
Sometimes a later FLR or Instant Restore job re-uses that still-mounted NFS datastore successfully, but a lot of the time you get those "new" datastores with the same name plus (1) (2) (3) etc. behind it.
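
For reference, a quick way to see how widespread these leftovers are is a short inventory check through the vSphere API. Below is a minimal pyVmomi (Python) sketch; the vCenter address and credentials are placeholders, and the "VeeamBackup" name prefix is just my assumption about how the vPower NFS datastores are named here:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholders - replace with the real vCenter address and credentials
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk the whole inventory and list datastores that look like vPower NFS leftovers,
# together with the hosts that still mount them
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.name.startswith("VeeamBackup"):
        hosts = [mount.key.name for mount in ds.host]
        print(ds.name, "mounted on:", ", ".join(hosts) if hosts else "no hosts")

Disconnect(si)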

At this customer, vCenter always bombs out with an error message when trying to remove such a datastore (regardless of the name). It almost never works.
The only way to get rid of it (at least visually) is by disconnecting the "stuck" ESX server, opening up a second VI client, connecting directly to the ESX server that has the old Veeam NFS mount, and deleting it from there.
Then close the directly connected VI client and reconnect that ESX host in vCenter. Now the view of the datastores is updated and the NFS mount is gone from vCenter too.
That is, visually gone, but it feels like those datastore names are still lurking around in the vCenter database somewhere.
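
For what it's worth, that same unmount can also be done per host through the vSphere API instead of the disconnect/reconnect dance. A rough pyVmomi sketch along those lines (again with placeholder connection details, and with the datastore name hard-coded as an example; RemoveDatastore will refuse to unmount if anything is still registered or running on the datastore, so try it on a test host first):

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

# Placeholder connection details, same caveats as in the listing sketch above
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

STALE = "VeeamBackup_SSZH0037"  # the leftover vPower NFS datastore to drop

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # also catch the duplicates that vSphere renamed to "... (1)", "... (2)" etc.
    if ds.name == STALE or ds.name.startswith(STALE + " ("):
        for mount in ds.host:
            host = mount.key
            print("Unmounting", ds.name, "from", host.name)
            # HostDatastoreSystem.RemoveDatastore detaches the datastore from this one host
            host.configManager.datastoreSystem.RemoveDatastore(ds)

Disconnect(si)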

Because those NFS mounts still (partially?) exist in the vCenter database, Veeam gets in trouble when it wants to deploy FLR appliances or Instant Restores later on, because it wants to mount them with the same names as in the past.


My first question is: am I right in assuming that the current problems might come from a "stuck" entry in the vCenter database, and if so, how do we clean it up?
Related question: how do we prevent it from happening again after everything is clean and working?
It's a full-blown production environment, so hacking around in the database or rebuilding vCenter is not an option.


Second question: this behaviour of Veeam (or vSphere, I cannot judge) of not cleaning up (dismounting) its NFS mounts is a major pain in the ass. It causes all kinds of problems, in Veeam Monitor too, for example, where the mounts show up as dead entries (ghosted, greyed out) that require manual removal. In large environments this means a lot of manual labour.


I hope that we (the customer and I) can get some useful help here. I don't know what's going on in the Russian support center and I do not want to judge anyone. But the customer is getting aggravated, as it has been well over a month now of not being able to restore individual files from GPT disks (don't tell the customer he must stop using GPT; he has his reasons, and migrating to MBR is not a realistic/viable option). Restoring from GPT via the "Other OS" option, which triggers the use of the appliance, works very well for other customers and in the lab.
Gostev
Chief Product Officer
Posts: 31766
Liked: 7268 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: B&R 5.02 - FLR not possible due to NFS Mount problem

Post by Gostev »

Re: 1st question, you need to work with support on that. I cannot troubleshoot any issues over forum posts. If you are not happy with support, you can always request a callback from a support manager, and he will take it from there. But as you said yourself, this looks like an environmental issue, not a product issue, as it does not happen with other customers. Maybe this is why our support cannot assist here.

Re: 2nd question, this is currently by design. We assume that mounts will be reused constantly, because restores are done all the time in larger environments. Creating a new mount takes more time, and many customers also like to create the mount manually for various reasons. We can consider adding an optional auto-dismount down the road, but frankly, this is the first such request in all the time since vPower NFS was introduced.

Re: GPT disks. Windows FLR from GPT disks is not supported. This is clearly stated in the product's System Requirements. We are, however, adding support for that in v6.
stevenrodenburg1
Expert
Posts: 135
Liked: 20 times
Joined: May 31, 2011 9:11 am
Full Name: Steven Rodenburg
Location: Switzerland
Contact:

Re: B&R 5.02 - FLR not possible due to NFS Mount problem

Post by stevenrodenburg1 »

Re: 1st question, ok. I understand. Just trying to help my customer.

Re: 2nd question. I know that, and I understand and agree with that design decision. But what I see a lot in the field, and what happens at this customer's site too, is the mounting of additional NFS datastores with (1) (2) (3) etc. behind them (the default behaviour of vSphere when handling duplicate datastore names). The degree to which this happens varies greatly: some customers see it happen a lot, some seldom.
A feature called "remove stale NFS mounts" or something similar would be very helpful. After all, Veeam knows which datastores it had vCenter mount in the past and, with this knowledge, can clean them up too (via vCenter, of course).
May i hereby request such a feature, in addition to your own suggestion?

Thanks for your time and i wish you all the best,
Steven Rodenburg
uniQconsulting Switzerland
tsightler
VP, Product Management
Posts: 6033
Liked: 2859 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: B&R 5.02 - FLR not possible due to NFS Mount problem

Post by tsightler »

Gostev wrote: Re: 2nd question, this is currently by design. We assume that mounts will be reused constantly, because restores are done all the time in larger environments. Creating a new mount takes more time, and many customers also like to create the mount manually for various reasons. We can consider adding an optional auto-dismount down the road, but frankly, this is the first such request in all the time since vPower NFS was introduced.
This isn't exactly true. I asked for this feature quite some time ago, and I've heard this complaint more than once from other Veeam users, although perhaps not on this forum. In larger environments you try to get users to use only a specific host for restores; however, over time you end up with the Veeam NFS mount all over the place. Over time there would sometimes even be two of them (a greyed-out lost connection, and a new one). For most hosts it's an annoyance and should not be there. I wish there were an option to clean up the mount after its job is done, although I do understand it can be complicated to track (other jobs could be using the mount). I found myself cleaning up the old mounts every few months at least. It's not a functionality-impacting issue, just a matter of being as clean an implementation as possible.
stevenrodenburg1
Expert
Posts: 135
Liked: 20 times
Joined: May 31, 2011 9:11 am
Full Name: Steven Rodenburg
Location: Switzerland
Contact:

Re: B&R 5.02 - FLR not possible due to NFS Mount problem

Post by stevenrodenburg1 »

In larger environments you try to get users to use only a specific host for restores; however, over time you end up with the Veeam NFS mount all over the place. Over time there would sometimes even be two of them (a greyed-out lost connection, and a new one). For most hosts it's an annoyance and should not be there
That is exactly what I meant. And at this current customer, these "leftovers" are causing their problem (that is my assumption; I'm happy to be proven wrong). We cleaned up all ESX hosts already, but Veeam FLR keeps generating a mess. Sorry to put it in those words :-)
When we run FLR a couple of times, Veeam generates those (1) (2) (3) mounts again on some hosts. On other ESX hosts Veeam nicely re-uses the NFS mount that's already there.
I cannot grasp why it does this on some hosts and not on others. I don't see a pattern.

Anyway, we will work with Veeam Support on this. I know they are doing their best and are showing a great amount of goodwill. I don't want to hurt anyone's feelings. It's just that with difficult issues like this, it sometimes helps to involve a larger community to shed some new light and bring fresh ideas to a case.