Hi, having some issues here.
We have a 165 TB repo using XFS with fast clone. Our backups are only about 65 TB. They are also sent to another site via backup copy jobs.
We purchased a new array. We attempted to use rsync to move the 65 TB to the new repo (both repos are attached to the same host via SAS) and we ran out of space. The new array has more raw capacity, but due to a limitation in the array a single volume can only be 120 TB. That is technically enough, but from the limited info I found on this it sounds like I might need to use dd and do a block-level copy. What I have read suggests that is only possible with same-sized source and target, which isn't something we will be able to do, as the original source is bigger than the new target, even though both are big enough for the data.
What is the best method of moving the backups to a new array? Is there a function within Veeam we can use to accomplish this instead of a Linux command?
Bearing in mind we really want to move the primary site repo without breaking the replication jobs or needing to reseed them, which is simply not possible over the wire.
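My guess is rsync rehydrated the fast-cloned blocks, since it doesn't preserve reflink sharing between files. Comparing df against du on the source shows how big that gap is (the mount point below is just an example):
df -h /backups     # real usage on the filesystem; reflink-shared blocks counted once
du -sh /backups    # per-file total with shared blocks counted per file, roughly what a plain rsync copy needs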
grant albitz (Enthusiast)
Olafur Helgi Haraldsson (Service Provider, Iceland)
Re: XFS Fastcloned Repo replacement
Hi,
If your volume is an LVM volume, you can use pvmove to migrate the extents to the new array at the block level... ish:
pvcreate /dev/mapper/newstoragearraylun_same_or_bigger_first120TB
pvcreate /dev/mapper/newstoragearraylun_same_or_bigger_second45TB
vgextend old_volumegroup /dev/mapper/newstoragearraylun_same_or_bigger_first120TB
vgextend old_volumegroup /dev/mapper/newstoragearraylun_same_or_bigger_second45TB
pvmove -b /dev/mapper/oldstoragearray:1-XXX0 /dev/mapper/newstoragearraylun_same_or_bigger_first120TB
pvmove -b /dev/mapper/oldstoragearray:XXX1-ZZZZ /dev/mapper/newstoragearraylun_same_or_bigger_second45TB
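The XXX0/XXX1 parts are physical extent numbers; to pick the split point, check the total extent count on the old PV first:
pvdisplay /dev/mapper/oldstoragearray    # "Total PE" is the extent count to split across the two new LUNs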
Monitor the status with:
lvs -a -o+devices
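You can also watch how many extents are still allocated on each PV; pv_pe_alloc_count should drop to 0 for the old device:
pvs -o pv_name,pv_size,pv_pe_count,pv_pe_alloc_count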
When the copy status is done, and you are sure that /dev/mapper/oldstoragearray is empty (no extents left on the volume), you can remove the unused device from the VG:
vgreduce old_volumegroup /dev/mapper/oldstoragearray
and then remove it as it is being decommissioned:
pvremove /dev/mapper/oldstoragearray
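Afterwards, a quick sanity check that the VG is backed only by the new LUNs:
vgs old_volumegroup    # size should now come from the two new LUNs
pvs                    # the old device should no longer be listed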
In Veeam v12 you will be able to do this with an internal reflink-aware command in the UI, if you can wait a few months.
grant albitz (Enthusiast)
Re: XFS Fastcloned Repo replacement
No LVM, just a /dev device mounted as XFS.
Also, I am a little unclear on what you would have been doing there even if it were LVM.
We have one 165 TB disk for current production and a new 120 TB disk. I see you referring to a "first 120TB" LUN, but nothing like that exists here; there is a size conflict between new and old, unfortunately. I don't see you doing anything to separate the space by size other than referring to the devices by name as if they were sized like that.
grant albitz (Enthusiast)
Re: XFS Fastcloned Repo replacement
I do see how this item is specifically addressed in Veeam v12. We have a little bit of time, but I'm not sure we have that much. I do have a ticket open: 05629209. Maybe we can get on the short list for trying it out early or something. Doing an rsync of the data took over a week, so even if we could dd this with same-sized volumes, I can't pause backups for a week while the copy runs. New active fulls to the new repo are looking like my only option.