pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx » 1 person likes this post

This is not directly XFS related, but is Validator available for Linux too? I couldn't find it. Scheduled backup job health checks are not ideal for us, as they add a significant amount of time to the jobs.

So there is currently no way to check files on a Linux repository on demand other than SureBackup? I like the idea of checking every block more than just booting a VM and assuming that it is working. I know that more checks can be done, but as it is with such tests, they never cover everything.
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx »

My bad, Validator works with Linux repositories too.

We are about to add new extents and new servers, and I have to rearrange the extents. Is there any way to copy/move reflink backups from one fs to another without dehydrating them? cp --reflink will probably not work between two different filesystems; xfs_copy / xfsdump might work, but all data has to be copied.
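Just to illustrate the cp limitation (paths below are made up, and this is only a sketch): cloning works within one filesystem, but across two filesystems reflink is simply not possible.

Code: Select all

# within the same XFS filesystem: instant, blocks are shared (fast clone)
cp --reflink=always /mnt/ext1/JobA/backup.vbk /mnt/ext1/JobA-copy/backup.vbk

# across two different filesystems: --reflink=always fails outright,
# while --reflink=auto silently falls back to a full (dehydrated) copy
cp --reflink=always /mnt/ext1/JobA/backup.vbk /mnt/ext2/JobA/backup.vbk
cp --reflink=auto /mnt/ext1/JobA/backup.vbk /mnt/ext2/JobA/backup.vbk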

In one case I would just need to copy/move the existing data to a second XFS fs (new extent) on the same host and then delete some job directories on both extents, so that backups are distributed. I can't leave it there, as the second fs will be used as a copy extent in the future; in the other case I'd need to move the data to a fs on a different host.

As those extents are all hardware RAID60 in high-density servers, it would probably be easier to just take the whole RAID set and physically move it to the other server... As far as I know, the RAID information is stored on the disks, and the RAID set should be visible on the new server.
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev »

There are no known ways to copy/move reflink backups without dehydrating them. I believe there's a thread about this around here where people brainstormed some possible solutions, but nothing usable was found in the end.
pirx
Veteran
Posts: 599
Liked: 87 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx »

In my case xfs_copy will work to copy the whole fs. This is a corner case, but I'll give it a shot.
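Roughly what I plan to try, in case anyone is interested (device names and mount points are placeholders): xfs_copy duplicates the whole filesystem at block level, so reflinks stay intact, but the target has to be at least as large as the source filesystem.

Code: Select all

# source filesystem should be unmounted (or read-only) for a consistent copy
umount /mnt/ext1

# block-level copy of the whole XFS filesystem, reflinks preserved
xfs_copy /dev/sdb1 /dev/sdc1

# the copy gets a new UUID by default, so it can be mounted alongside the original
mount /dev/sdc1 /mnt/ext2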
fillerbunnie
Novice
Posts: 4
Liked: never
Joined: Apr 18, 2021 10:22 pm
Full Name: Elage
Contact:

[MERGED] XFS Fast-clone + synthetic fulls.. a fragmentation trap without active fulls?

Post by fillerbunnie »

So we have come across a client who has configured an off-site job that is currently set to use XFS fast clone, as they were running into slow merge issues (spinning disk repo).

Now they have a forward incremental job with weekly synthetic fulls that has been running very well for the past few weeks, it seems. Merges are quick because of fast clone and the IT guys are happy; however, I'm wondering if this is a potential time bomb for restore performance. As it is an off-site repo, the bandwidth doesn't allow for regular active fulls. This means the source repo is slowly getting more and more fragmented as XFS fast clone does its work.

In my mind the solution would be to enable "Defragment and compact" on the job, but Veeam doesn't allow you to do this, as (I guess?) it assumes that the synthetic fulls are naturally not fragmented?

I don't know if I'm missing something here, but in my mind you should be able to have Veeam periodically defragment any job that used fast clone. Over months/years, a ~30-day restore point job would get more and more fragmented. Obviously Veeam can't defragment every synthetic full without rehydrating the cloned blocks multiple times and using extra space, but rewriting the latest synthetic full *without* using block clone (then ideally re-targeting the existing cloned blocks to the new location) seems like it would fix the issue. That way the most recent synthetic full would be defragmented and restore performance would be dramatically improved.

Is the solution to switch to reverse-incremental with a defragmentation schedule? I realise fragmentation is unavoidable when dedupe technologies (including block-cloning) are used, but having your most recent restore points defragmented seems valuable?

I realise this problem may be entirely in my head; if so, apologies for my ignorance. Any thoughts would be appreciated here!
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: V10 & XFS - all there is to know (?)

Post by HannesK »

Hello,
I merged your question into the existing thread. Fragmentation was discussed earlier.

Your backup mode is fine. Nothing "better" comes to mind. :-)

The WAN connection sounds to me like your bottleneck in any case.

Best regards,
Hannes
mweissen13
Enthusiast
Posts: 93
Liked: 54 times
Joined: Dec 28, 2017 3:22 pm
Full Name: Michael Weissenbacher
Contact:

Re: [MERGED] XFS Fast-clone + synthetic fulls.. a fragmentation trap without active fulls?

Post by mweissen13 » 1 person likes this post

fillerbunnie wrote: Aug 31, 2021 9:02 am I realise this problem may be entirely in my head; if so, apologies for my ignorance. Any thoughts would be appreciated here!
Hi,
well, I don't think that this problem is only in your head. But I can tell that after 1.5+ years of using XFS with fast clone, the performance degradation in our case is not too severe. XFS seems to do better in this regard when compared with ReFS. Note: this is a totally empirical, not verified or reproduced opinion of my own :-)
ndz
Lurker
Posts: 1
Liked: never
Joined: Sep 30, 2020 5:13 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by ndz »

pirx wrote: Jun 16, 2021 6:05 am In my case xfs_copy will work to copy the whole fs. This is a corner case, but I'll give it a shot.
Looking to see if anybody was successful in implementing a copy with either dd or xfs_copy and can share their experience.
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: V10 & XFS - all there is to know (?)

Post by HannesK »

Hello,
and welcome to the forums.

dd works. I have no information on xfs_copy.
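If it helps, something along these lines is what I mean by dd (device names are just examples); like xfs_copy, it copies the whole filesystem block by block, so reflinks survive.

Code: Select all

# source filesystem unmounted to get a consistent copy
umount /mnt/ext1

# raw block-level copy; reflink metadata is copied along with everything else
dd if=/dev/sdb1 of=/dev/sdc1 bs=64M status=progress conv=fsync

# dd keeps the original filesystem UUID, so generate a new one before
# mounting both copies on the same host
xfs_admin -U generate /dev/sdc1
mount /dev/sdc1 /mnt/ext2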

Best regards,
Hannes
crackocain
Service Provider
Posts: 248
Liked: 28 times
Joined: Dec 14, 2015 8:20 pm
Full Name: Mehmet Istanbullu
Location: Türkiye
Contact:

Re: V10 & XFS - all there is to know (?)

Post by crackocain »

Hello

Can file backups (NAS backups) use block clone or reflink?
VMCA v12
Gostev
Chief Product Officer
Posts: 31804
Liked: 7298 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev »

No, this technology applies only to image-level backups. NAS Backup has no notion of a synthetic full backup to start with.
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

[MERGED] XFS restore performance over time when using reflink

Post by JaySt »

I'm doing some research on the output of the fio tests suggested by Veeam at https://www.veeam.com/kb2014
I'm testing XFS repositories exclusively, running the sequential write test, random read test and sequential read test with fio and the parameters suggested in the KB article.
The XFS reflink feature is used heavily on all repositories.
Repositories are HPE DL380 Gen10 servers with local RAID6 groups, 12Gbps NL-SAS drives.

My focus is on the sequential read test at the moment, simulating a simple restore. I'm seeing quite big differences among the repositories I'm testing, where I was not expecting the difference to be so significant given the hardware. For example, the difference between two repositories differing only in HDD size (8TB vs 12TB) is too big to explain well. So I went to dig a little deeper into finding the cause of this.

Some observations:

I went ahead and did the sequential read test (as documented in the KB) on a fio test file that was freshly created, and I saw a big improvement in read performance (throughput) compared to doing the same test on an existing large VBK file.
On the fio test file I got between 380-450 MB/s, while on the .vbk file I got 70 MB/s.

Looking at the fragmentation of both files (filefrag <path to file>), my .vbk file had 600K+ extents and the fio test file around 150. Big difference there.
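In case others want to compare, this is roughly what I'm running (the path is just the same example as in the fio command below):

Code: Select all

# extent count of a single backup file
filefrag /VeeamBackups/JobName/VMname.vbk

# extent counts of all fulls in a job folder
filefrag /VeeamBackups/JobName/*.vbk

# -v lists the individual extents and their physical offsets
filefrag -v /VeeamBackups/JobName/VMname.vbk | less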

I'm looking for a bit of confirmation about a possible cause of the slow read on the .vbk files. Can reflink cause such fragmentation on the synth full that it would hurt restore performance that much?

Are others able to see/test what the sequential read test would do on one of their .vbk files and compare it to their expectations? When selecting the .vbk to test on, one test showed better performance for a more recent .vbk compared to the oldest one within the folder.

I used the following to simulate a full restore:

Code: Select all

fio --name=seq-read-test  --filename=/VeeamBackups/JobName/VMname.vbk --bs=512k --rw=read --ioengine=libaio --direct=1 --time_based --runtime=600
Veeam Certified Engineer
HannesK
Product Manager
Posts: 14836
Liked: 3083 times
Joined: Sep 01, 2014 11:46 am
Full Name: Hannes Kasparick
Location: Austria
Contact:

Re: V10 & XFS - all there is to know (?)

Post by HannesK » 1 person likes this post

Hello,
yes, fragmentation has an impact. The question of "how much" is hard to answer and also depends on how much data is on the file system. Copy-on-write (COW) file systems generally don't like being filled higher than ~80%. You will find similar posts for ReFS. I merged your question into the existing XFS thread. Fragmentation was discussed earlier.

Simulating a sequential read does not match a VBK restore, because a restore is more or less random read due to fragmentation.
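If you want a number that is closer to what a restore actually does, a random read run against the same .vbk should be a better approximation (same parameters as your test, just with randread; the path is your example path):

Code: Select all

fio --name=rand-read-test --filename=/VeeamBackups/JobName/VMname.vbk --bs=512k --rw=randread --ioengine=libaio --direct=1 --time_based --runtime=600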

Best regards,
Hannes
JaySt
Service Provider
Posts: 454
Liked: 86 times
Joined: Jun 09, 2015 7:08 pm
Full Name: JaySt
Contact:

Re: V10 & XFS - all there is to know (?)

Post by JaySt »

Ok, thanks. Yes, the sequential read test on the fio test file would not simulate a VBK restore correctly. Doing the same read test on the (fragmented) .vbk file itself should come pretty close, though.
Not really something that can be done about it, I think, but I'm curious to see others posting some figures.
Veeam Certified Engineer