Discussions specific to the VMware vSphere hypervisor
pkelly_sts
Expert
Posts: 577
Liked: 64 times
Joined: Jun 13, 2013 10:08 am
Full Name: Paul Kelly
Contact:

Observation - Evacuating extent to folder on same drive is slower than expected

Post by pkelly_sts » Sep 28, 2018 9:40 am

I'm in the midst of a tidy-up of our repositories, and one of the things I'm doing is effectively renaming some of them to make it more obvious at first glance exactly where they sit (i.e. on which drives), so that it's easy to see whether storage is well balanced.

You can rename a repository and a SoBR, but you seemingly can't rename or re-point the underlying folder that a repository/extent points to.

So, in my case I've simply created a replacement folder on the same drive (with a more sensible/meaningful name), added it as a repository, then added that repository as an extent to the existing SoBR, and finally evacuated the extent I want to decommission.

I expected Veeam to realise the data is all on the same drive and perform a relatively quick on-disk move, but sitting here 20 minutes later it's still hammering away at the disks like a traditional copy/paste (so the disk is getting hit with reads and writes at the same time).

The only logical explanation that springs to mind is that VBR is ensuring integrity by copying first and then deleting the source. Surely that step isn't necessary on the same drive, though, as there's probably more risk in all the simultaneous read/write activity than there would be in a simple on-disk move?
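To illustrate the distinction (this is generic OS behaviour, not Veeam's actual code): a move within the same filesystem can be a pure metadata operation, near-instant regardless of file size, while a move across filesystems must copy the data and then delete the source. Python's `shutil.move`, for example, tries a rename first and only falls back to copy-then-delete when the rename fails:

```python
import os
import shutil
import tempfile

# Throwaway directories standing in for the old and new repository folders.
base = tempfile.mkdtemp()
src_dir = os.path.join(base, "old-repo-name")
dst_dir = os.path.join(base, "new-repo-name")
os.makedirs(src_dir)

# Create a dummy "backup file" in the old folder.
with open(os.path.join(src_dir, "backup.vbk"), "wb") as f:
    f.write(b"\0" * 1024)

# Same drive: this is a rename of the directory entry, not a data copy,
# so it completes instantly even for multi-TB contents.
shutil.move(src_dir, dst_dir)

print(os.path.exists(os.path.join(dst_dir, "backup.vbk")))  # True
print(os.path.exists(src_dir))  # False
```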

Something that has also struck me while writing this is that the copy/paste/delete process will also require disk space. I'm hoping it's intelligent enough to do this in a staged fashion rather than deleting only at the end of the process, otherwise I'm going to get pretty tight on disk space!

So, is there room for more efficient working at a local-disk level in such activities?

(Failing all of this, having the means to simply rename an underlying folder in a supported/clean fashion would save having to do any of the entire process above).

foggy
Veeam Software
Posts: 18642
Liked: 1628 times
Joined: Jul 11, 2011 10:22 am
Full Name: Alexander Fogelson
Contact:

Re: Observation - Evacuating extent to folder on same drive is slower than expected

Post by foggy » Sep 28, 2018 2:00 pm

Hi Paul, it is expected that the entire data set is copied during evacuation; there's no specific logic for simply moving pointers on the file system (which I guess is what you'd like to see). The space required depends on the number of files and repository task slots: already-copied files are immediately deleted from the source folder, but if there are enough slots to copy everything in parallel, you'd need twice the amount of space.
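The staged behaviour described above can be sketched as follows (a simplified single-threaded illustration, not Veeam's implementation): copying one file at a time and deleting each source file as soon as its copy completes bounds the temporary extra space to the largest single file, whereas N parallel slots would multiply that overhead.

```python
import os
import shutil
import tempfile

def staged_move(src_dir, dst_dir):
    """Copy files one at a time, removing each source file as soon as its
    copy completes, so peak extra space is bounded by the largest single
    file rather than by the whole folder."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        shutil.copy2(src, dst)  # temporary double usage: this file only
        os.remove(src)          # reclaim the space before the next file

# Demo with throwaway directories standing in for extents.
base = tempfile.mkdtemp()
old = os.path.join(base, "old-extent")
new = os.path.join(base, "new-extent")
os.makedirs(old)
for name in ("job1.vbk", "job1.vib"):
    with open(os.path.join(old, name), "wb") as f:
        f.write(b"data")

staged_move(old, new)
print(sorted(os.listdir(new)))  # ['job1.vbk', 'job1.vib']
print(os.listdir(old))          # []
```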

Andreas Neufert
Veeam Software
Posts: 4074
Liked: 742 times
Joined: May 04, 2011 8:36 am
Full Name: Andreas Neufert
Location: Germany
Contact:

Re: Observation - Evacuating extent to folder on same drive is slower than expected

Post by Andreas Neufert » Sep 30, 2018 9:53 pm

If you move data manually (at the OS level, with a move operation) to another extent, Veeam should detect it and work with it.
veeam-backup-replication-f2/move-a-job- ... 51211.html


If you transport the data manually (at the OS level, with a move operation) to another repository of the same type (Repository => Repository, SOBR => SOBR), you can afterwards go to the Veeam job and select the other repository. Veeam will then check whether the data is completely there and will afterwards work as usual.
More details here: https://www.veeam.com/kb1729


And to complete this: https://www.veeam.com/kb2236

In all cases, disable the jobs first so that no data is being accessed.
Rescan the Repository/SOBR afterwards, so that Veeam detects the change.
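As an independent sanity check before repointing a job (this is a generic file-inventory sketch, not part of Veeam's own verification), you can confirm that nothing was left behind by comparing the relative paths and sizes of everything under the new location:

```python
import os
import tempfile

def inventory(root):
    """Map each file's path relative to root to its size in bytes."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            result[os.path.relpath(full, root)] = os.path.getsize(full)
    return result

# Demo with a throwaway directory standing in for the new repo path.
base = tempfile.mkdtemp()
new_repo = os.path.join(base, "new-repo")
os.makedirs(os.path.join(new_repo, "sub"))
for rel in ("a.vbk", os.path.join("sub", "b.vib")):
    with open(os.path.join(new_repo, rel), "wb") as f:
        f.write(b"x" * 10)

# Compare inventory(old_path_snapshot) against inventory(new_repo)
# before disabling the old repository for good.
print(sorted(inventory(new_repo)))
```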


Re: Observation - Evacuating extent to folder on same drive is slower than expected

Post by pkelly_sts » Oct 01, 2018 8:39 am

Ah, thanks Andreas. I was coming here this morning to say that the in-app move was EXTREMELY slow (it had moved just 137 GB in approx. 80 minutes).

So in my case, reading the above links, moving an extent of a SoBR should actually be pretty straightforward (I know there are certain limitations with SoBRs that mean they can't always be treated identically to simple repos, so I didn't want to assume).

Will be giving it a manual go today.

