- Posts: 151
- Liked: 17 times
- Joined: Apr 26, 2013 4:53 pm
- Full Name: Dan Swartzendruber
So, I have a CentOS 7 server which I use as a backup repository. It runs in a Pacemaker cluster with another host, also running CentOS 7. I wanted to do an upgrade on the primary storage server, so I logged into the cluster management GUI and told it to put the node into standby. Putting the node into standby should:
Deactivate the virtual IP that vSphere uses to access virtual machine files on that datastore.
Export the ZFS pool.
The backup storage server would then:
Import the ZFS pool.
Activate the virtual IP for NFS.
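For reference, on a pcs-managed Pacemaker cluster the manual version of that failover looks something like the following. The node names are hypothetical, and this is only a sketch of the standby/unstandby flow, not necessarily the exact commands the GUI runs under the hood:

```shell
# Hypothetical node names: storage1 = primary, storage2 = backup.
# Put the primary into standby; Pacemaker stops its resources there
# (VIP down, zpool export) and starts them on the other node.
# (On older pcs versions the syntax is "pcs cluster standby storage1".)
pcs node standby storage1

# Watch the resources (zpool import, then the NFS VIP) come up on storage2.
pcs status

# After the upgrade, let the primary rejoin the cluster.
pcs node unstandby storage1
```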
I was surprised to see it say 'export failed!'. When I tried to export the pool manually, I got an error that the filesystem in question was 'in use'. After some poking around, I discovered a couple of orphaned bash processes running Perl scripts (I assume these are the data movers or some such?). I killed them manually, and was then able to migrate the cluster resource to the backup node. I *think* the orphaning happened because the Windows Server 2008 R2 VM that runs the Veeam software was rebooted in the middle of the night due to a Windows update.

So, my question: is there any way to make sure this doesn't happen again? E.g. is there some setting or something that will make these processes go away on their own? I can check every morning for orphan processes, but that is a real drag, and if it happens again it ensures the cluster software can't fail over in an unattended scenario (which is the whole point of running the cluster software). If need be, I will install the NFS client software on the proxy and access the backup repository via a shared folder instead. Thanks!
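For anyone hitting the same thing: orphaned processes get reparented to PID 1, so one way to spot these leftovers is to scan the process table for anything with PPID 1 whose command line looks like one of the stuck Perl scripts. A minimal sketch that could run from cron; the 'perl' pattern is an assumption, so match whatever the stuck processes actually look like on your box:

```shell
#!/bin/sh
# Print the PID from "PID PPID COMMAND" lines where PPID is 1
# (orphaned, i.e. reparented to init) and the command matches a pattern.
filter_orphans() {
    awk -v pat="$1" '$2 == 1 && $0 ~ pat { print $1 }'
}

# Scan the live process table for matching orphans.
# "perl" is a guess at what the stuck Veeam data movers look like here.
find_orphans() {
    ps -eo pid=,ppid=,args= | filter_orphans "$1"
}

# Review the output before killing anything; uncomment to actually reap:
# for pid in $(find_orphans 'perl'); do kill "$pid"; done
```

Obviously I'd eyeball what this returns before killing anything, since reaping a live data mover mid-job would fail the backup.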
- Veeam Software
- Posts: 17931
- Liked: 1512 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson