-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Mar 31, 2016 12:57 pm
- Contact:
Evacuation of an extent in a scale-out repository
Hi,
I have a question regarding the evacuation of an extent that is part of a scale-out backup repository.
I was under the impression that the evacuation would be non-disruptive, but I can see from a customer where we are performing this that it is not.
I've noticed the following:
1) The migration uses infrastructure resources alongside normal operations, which means normal backups have to wait longer for resources to become available. The migration appears to be more aggressive, or to have a higher priority, since it grabs most of the resources overall (e.g., with a limit of 10 maximum concurrent tasks, the migration will take most of them). Is there any way to 'reserve' some resources for backup jobs?
2) A backup cannot complete if a certain number of files has not been migrated - for the main backup job, I got the error:
"Unable to allocate processing resources. Error: Some extents storing previous backup files are in maintenance mode"
Can this be alleviated by using the option 'Perform full backup when required extent is offline'? If so, will the full be merged after the migration is complete?
The repository was created with v8, so it does not have 'Use per-VM backup files' enabled, meaning there are some big files present, which probably are not helping the overall migration.
Incidentally, if I take an existing v8 repository and enable 'Use per-VM backup files', when and how will it move from the single large file to smaller per-VM files?
Fortunately, it appears that the migration for this customer's repository will take less than 24 hours. However, I would like to know more about this in case we have to do the same for a larger customer.
Thanks in advance.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Evacuation of an extent in a scale-out repository
b.russel wrote: 1) The migration uses infrastructure resources alongside normal operations, which means normal backups have to wait longer for resources to become available. The migration appears to be more aggressive, or to have a higher priority, since it grabs most of the resources overall. Is there any way to 'reserve' some resources for backup jobs?
I would say this is expected. In some cases you would want to evacuate the backups as soon as possible, anticipating storage corruption.
b.russel wrote: 2) A backup cannot complete if a certain number of files has not been migrated - for the main backup job, I got the error:
"Unable to allocate processing resources. Error: Some extents storing previous backup files are in maintenance mode"
Can this be alleviated by using the option 'Perform full backup when required extent is offline'?
This option takes effect only in cases where the extent is actually offline.
b.russel wrote: Incidentally, if I take an existing v8 repository and enable 'Use per-VM backup files', when and how will it move from the single large file to smaller per-VM files?
After the next active full backup.
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Mar 31, 2016 12:57 pm
- Contact:
Re: Evacuation of an extent in a scale-out repository
Hi foggy,
Thanks for a quick response.
foggy wrote: I would say this is expected. In some cases you would want to evacuate the backups as soon as possible, anticipating storage corruption.
There's no doubt that the faster an extent can be evacuated, the better. However, when the evacuation takes longer than the time between two backup cycles using the repository, the two will collide, and it would be good if that could be controlled. If the evacuation depletes all resources, then please mention it in the docs, so the behavior is known. The best option, in my mind, would be a setting that gives the evacuation the lowest priority, so it wouldn't block normal backup operations.
foggy wrote: This option takes effect only in cases where the extent is actually offline.
I was expecting this behavior, thanks for confirming it. Now, it doesn't seem like it, but are the files evacuated intelligently? By intelligently, I mean: are the files needed by the next backup cycle copied first?
Thanks
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Evacuation of an extent in a scale-out repository
b.russel wrote: Now, it doesn't seem like it, but are the files evacuated intelligently? By intelligently, I mean: are the files needed by the next backup cycle copied first?
There's no such logic.
-
- Novice
- Posts: 5
- Liked: 1 time
- Joined: Mar 31, 2016 12:57 pm
- Contact:
Re: Evacuation of an extent in a scale-out repository
OK, that would have been a nice feature.
Are there any best practices when faced with the evacuation of a large repository and wanting to keep it a non-disruptive operation?
I guess performing an active full backup cycle would be desirable on a v8 (non-'Use per-VM backup files') repository, to break up the large files and thus ensure a higher chance of success per VM.
-
- Veeam Software
- Posts: 21139
- Liked: 2141 times
- Joined: Jul 11, 2011 10:22 am
- Full Name: Alexander Fogelson
- Contact:
Re: Evacuation of an extent in a scale-out repository
Evacuation is considered a maintenance task, and maintenance usually implies some sort of downtime, so just keep that in mind when planning such activity and carefully select the time to perform it (for example, you can painlessly do this on a weekend, if your jobs do not run then). Anyway, thanks for your feedback, it sounds reasonable.
-
- Enthusiast
- Posts: 54
- Liked: 18 times
- Joined: Feb 02, 2015 1:51 pm
- Contact:
Re: Evacuation of an extent in a scale-out repository
It would be a really nice feature to evacuate a repository non-disruptively, especially for those of us who are pampered by Storage vMotion: I can move everything off a datastore just fine, without disrupting even the running service of the VMs in question, and then take that storage offline, etc. I expected I could do the same with a scale-out repository, but alas, that's not the case, because evacuation requires "Maintenance Mode", which in turn behaves like "Offline" rather than "Read-Only"...
-
- Enthusiast
- Posts: 54
- Liked: 18 times
- Joined: Feb 02, 2015 1:51 pm
- Contact:
Re: Evacuation of an extent in a scale-out repository
I just saw the following posts:
veeam-backup-replication-f2/selective-e ... ml#p180523
veeam-backup-replication-f2/migrate-a-p ... 33345.html
It seems like you can move individual backup job directories, and even individual backup chains or single files, between extents of a scale-out repository. That makes it quite easy to move files around and to manually evacuate a repository with minimal disruption (e.g. move a single VM backup chain at a time, then rescan).
Although "Evacuate Repository" could (and should?) automate this task for us.
-
- Enthusiast
- Posts: 54
- Liked: 18 times
- Joined: Feb 02, 2015 1:51 pm
- Contact:
Re: Evacuation of an extent in a scale-out repository
einhirn wrote: It seems like you can move around individual [...] Files between Extents of a scale-out repository. [...]
I just found out that you'd better leave the "<jobname>.vbm" file in place; it seems that something goes missing when you move that file manually. I'll report what happens with "Maintenance Mode" and "Evacuate Extent" once I've moved all my backup chains to the second extent in a temporary location...
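For what it's worth, the manual move discussed in this thread can be sketched in a few lines. This is a hypothetical illustration only, not Veeam tooling: the function name and layout assumptions are made up, and it assumes a per-VM chain consisting of one .vbk full plus .vib increments sitting in a flat job directory. It deliberately leaves the job's .vbm metadata file untouched, per the finding above:

```python
import shutil
from pathlib import Path

def move_chain(src_extent: Path, dst_extent: Path, vm_name: str) -> list[str]:
    """Move one VM's restore-point files (.vbk full + .vib increments)
    from one extent directory to another, leaving the job's .vbm
    metadata file where it is. Returns the names of the moved files."""
    dst_extent.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in sorted(src_extent.iterdir()):
        # Only this VM's restore points; never touch the .vbm.
        if f.suffix.lower() in (".vbk", ".vib") and f.name.startswith(vm_name):
            shutil.move(str(f), str(dst_extent / f.name))
            moved.append(f.name)
    return moved
```

After moving a chain this way, the scale-out repository still needs to be rescanned in the console so the new file locations are picked up.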