Michael Steffes (Influencer):
Backing up remote vSphere host to add larger drives
We own VBR Enterprise Plus and need to put bigger disks in a vSphere server at a remote location. We are contemplating adding a desktop (local to the vSphere host) with a USB 3.0 drive and making that machine a backup repository. We would create a virtual machine backup copy job and use the desktop as the backup repository for the job.

The current size is 6.7 TB, a USB 3.0 drive can potentially sustain 245 MB/s, and we have a gigabit Ethernet link between the vSphere host and the desktop. We plan to do this over a four-day weekend, and using https://wintelguy.com/transfertimecalc.pl with a conservative 50 MB/s write rate, I calculate the initial backup taking just over a day and a half. We will enable Windows write caching in the USB 3.0 drive's properties and format it as NTFS.

Our plan is to do a full backup a few days prior to the weekend and an incremental just before we remove the drives. We are also contemplating using a second USB 3.0 drive for a backup copy, just in case. Once the new drives have been installed, configured as RAID 10, and formatted, we will do a VM restore to the new drives.

My question: does this look like a good plan, and is it doable over a four-day weekend, or is there a better solution?
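For reference, here is the arithmetic behind that estimate as a quick Python sketch (the 6.7 TB size, 50 MB/s write rate, and gigabit link come from the numbers above; treating TB and MB as decimal units is an assumption):

# Rough transfer-time estimate for the initial full backup.
# Assumes decimal units: 1 TB = 10**12 bytes, 1 MB = 10**6 bytes.
data_bytes = 6.7 * 10**12    # current datastore size: 6.7 TB
write_rate = 50 * 10**6      # conservative sustained write: 50 MB/s

seconds = data_bytes / write_rate
print(f"{seconds / 3600:.1f} hours = {seconds / 86400:.2f} days")
# -> 37.2 hours = 1.55 days: "just over a day and a half"

# The USB 3.0 drive's ~245 MB/s is not the limiting factor here; the
# gigabit link tops out around 125 MB/s before protocol overhead, so
# 50 MB/s is a sensibly conservative planning figure.

The same math applies in the restore direction, since the data has to come back across the same gigabit link.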
Taylor B. (Enthusiast):
Re: Backing up remote vSphere host to add larger drives
Assuming this is a single, non-clustered host with local storage?
It seems like a feasible plan, but a lot can go wrong. Once you wipe the storage to expand it, your only copy is on those cheap USB disks. And it will be a day or two of copying them all back before you know it worked. I trust Veeam, but maybe not that much!
If I were recommending a solution to a customer for this, I would strongly suggest building a second array (an external shelf if there are no more internal slots) and creating a second datastore on it for the expansion. Backup - wipe - expand - restore is too much risk.
Michael Steffes (Influencer):
Re: Backing up remote vSphere host to add larger drives
The existing storage is a RAID 10 array of 8 disks local to the vSphere server. We will be removing the existing drives, making note of which slots they were in before removal just in case it matters. We are not replacing the RAID controller, so in theory, if things go south, we could reinsert the old drives and get back to where we were. We are not wiping the existing drives. If we were going to the extent of building an external shelf, we would instead look at just getting a second server and migrating the VMs over.