Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev » 2 people like this post

I would like to add to what Andreas said above.

Veeam started as a very SMB-focused vendor 15 years ago, and the reversed incremental backup mode checked all the boxes for vSMB customers:
- No multiple full backups (because they didn't have enough storage for those)
- The latest backup file is always a full backup file (very easy to manage: copy, transport etc.)
- Predictable backup window (each run is incremental, so there are no days when the backup takes 10x longer due to a full backup)
Performance did not matter as much due to the size of those tiny environments and the large backup windows available, since almost no SMB business requires that performance SLAs are met 24/7. Also, retention policies were typically extremely simple in these environments: practically always just the last 7-14 days' worth of backups.

However, as Veeam moved up market, we saw virtually all larger customers requiring GFS retention due to internal policies or external regulations. This is when we started looking for a space-efficient way to store those multiple GFS full backups, and as a result came up with the whole ReFS/XFS integration and fast cloning.

This is the very reason why the current default backup mode is forward incremental with periodic synthetic fulls. We also guide users to ensure they leverage fast cloning; for example, you will get a warning if you create a Windows-based repository on a volume that is not ReFS.
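On the Linux side, a minimal sketch of preparing an XFS volume so fast clone (reflink) is available looks like this - the device path and mount point are just example values:

# format with reflink and CRC enabled, then mount and verify
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb1
mkdir -p /mnt/veeam-repo
mount /dev/sdb1 /mnt/veeam-repo
xfs_info /mnt/veeam-repo | grep reflink   # expect reflink=1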

Having said that, even just a couple of years ago @tsightler liked to repeat how at least some of his largest enterprise customers love reverse incremental backup because of their backup storage and other requirements, and that we should not even think of discontinuing it :D I wonder if ReFS/XFS has changed this lately, Tom?
orb
Service Provider
Posts: 126
Liked: 27 times
Joined: Apr 01, 2016 5:36 pm
Full Name: Olivier
Contact:

Re: V10 & XFS - all there is to know (?)

Post by orb » 2 people like this post

kspare wrote: Jan 07, 2021 5:11 pm our storage servers are Synology RS3617RPxs with 11 8TB Barracuda Pro running RAID 5, and 2 1TB SSD cache drives in read-only mode with 10Gb networking... it really shouldn't be this slow, but it keeps indicating that the source is the bottleneck with the new XFS link clone volume...
11 x 8TB in RAID 5 with Seagate Barracudas, which are desktop-class drives. I hope you have a second copy somewhere.

Oli
tsightler
VP, Product Management
Posts: 6009
Liked: 2842 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V10 & XFS - all there is to know (?)

Post by tsightler » 7 people like this post

@Gostev Since block clone was first introduced over 4 years ago and has reached quite high levels of stability for most deployments over the last 2-3 years, save some pretty good bumps with Windows 2019 early adopters, forward incremental with synthetic full is the go-to recommended architecture, perhaps occasionally with straight forever incremental based on retention requirements. At this point I personally can't think of any large clients still using reverse incremental as a matter of course, and most of the corner case uses that did exist (for example, offloading daily fulls to tape) have been addressed. I consider reverse incremental a legacy backup mode at this point and it's certainly not part of our standard designs.
lolbebis
Enthusiast
Posts: 26
Liked: 5 times
Joined: Feb 26, 2020 9:33 am
Full Name: Mattias Jacobsson
Contact:

Re: V10 & XFS - all there is to know (?)

Post by lolbebis »

We are using reverse incrementals to flash disk and tier the increments to on-prem S3, which means we can increase the retention almost limitlessly without using more space on the primary flash backup system. (The S3 system has lots of disk for PACS images, so our backups are more or less a drop in the ocean.)
I think that works very well for us.

I don't know how implementing forward incremental with synthetic fulls would affect that setup, but I assume that we would have to send full backups to the S3 storage as well then?
aich365
Service Provider
Posts: 296
Liked: 23 times
Joined: Aug 10, 2016 11:10 am
Full Name: Clive Harris
Contact:

Re: V10 & XFS - all there is to know (?)

Post by aich365 »

We use reverse incrementals where customers want a daily tape backup - this ensures the latest RP is copied without the overhead of synthesizing a full.
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev »

@lolbebis the backup mode makes no difference to the object storage offload engine; it is always forever-incremental. You would not be able to "send full backups" even if you wanted to.

@aich365 there's definitely no "overhead of synthesizing" in v10; in fact it is completely the opposite: this approach performs a few times faster than copying an existing backup file to tape. But in any case, reversed incremental is not a requirement for your approach, as any backup mode with periodic fulls would do - you just have to time those fulls to the day when the tape export happens.
aich365
Service Provider
Posts: 296
Liked: 23 times
Joined: Aug 10, 2016 11:10 am
Full Name: Clive Harris
Contact:

Re: V10 & XFS - all there is to know (?)

Post by aich365 »

Hi Gostev
Thanks for the response. The problem with having the tape job wait on a "periodic full" is that if the periodic full fails to be created, the tape job will try to synthesize the last RP (or sit waiting for the periodic full until it times out).
In a reverse incremental job the last RP is always a full.

Regards
tsightler
VP, Product Management
Posts: 6009
Liked: 2842 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V10 & XFS - all there is to know (?)

Post by tsightler » 1 person likes this post

Daily full backup to tape is indeed the most common use case I've seen for reverse incremental these days, but synthesized full to tape daily has largely addressed this use case now that it performs so well in v10 and beyond. You don't have to synthesize the full on disk for this function to work.
DonZoomik
Service Provider
Posts: 368
Liked: 120 times
Joined: Nov 25, 2016 1:56 pm
Full Name: Mihkel Soomere
Contact:

Re: V10 & XFS - all there is to know (?)

Post by DonZoomik » 1 person likes this post

IMHO reverse incremental also works better for long chains (100+ RPs) without synthetic fulls. With forever incremental, jobs spend a long time (and a lot of read IO) reading embedded metadata in the VIBs and VBK before the job actually starts. In some basic testing, this start-up slowdown was noticeable already at 20-30 RPs (if I were to do weekly synthetics) - just some numbers from the top of my head, I don't remember the details. I had a support case for this when a forever incremental job did several times more read IO on the repository than total writes.
furyflash
Service Provider
Posts: 13
Liked: 5 times
Joined: Dec 20, 2016 8:16 am
Full Name: Alexander Kozlov
Contact:

Re: V10 & XFS - all there is to know (?)

Post by furyflash »

Hi,
I am trying to convert a traditional repository to XFS (fast clone). We have one backup job with a 14 TB full backup file and 30 incremental files of 1-2 TB each. We can't create an active full backup for this job (slow channel), so we are trying the additional option with a compact full.
Following the recommendation of Veeam support, we moved the full backup file to the XFS repository and ran a full file compact. It took 8 days and didn't help. Then Veeam support said that we need to move all the incremental backups too.
Once I copied the files to the XFS repository, they are ready for block cloning - I checked this with Linux commands. But why do we need to run a compact to get some flag set on the job?
Is it possible to change this flag somehow in the job configuration instead of running compact or active full jobs?
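(For reference, this is roughly the kind of check I mean - a minimal sketch, the repository path is just an example:)

# confirm the repository volume was formatted with reflink support, which fast clone requires
xfs_info /backups | grep reflink
# expect to see reflink=1 in the output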
JTT
Service Provider
Posts: 99
Liked: 2 times
Joined: Jan 02, 2017 7:31 am
Full Name: JTT
Contact:

Re: V10 & XFS - all there is to know (?)

Post by JTT »

If I deploy an Ubuntu 20.04 LTS repo with XFS today for Veeam v10, does that mean that after v11 is released the same repo gets the immutability option automatically, or should I redeploy the repo? My main goal is to know whether the Linux repo gets upgraded with the immutability option, or whether I should wait for the v11 release.
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev » 2 people like this post

You can go ahead and deploy now. There will be a path to convert it to a hardened repository without re-deploying. If I remember correctly, you will need to register the server with Veeam again using the new "single-use credentials", plus run a console command to change the ownership of the existing backup files to the limited account under which the newly deployed persistent components will run. So it is a fairly quick and simple process.
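(Purely as an illustration of that ownership change - the account name and path here are hypothetical, so follow the official procedure once it is published:)

# re-own existing backup files to the unprivileged account the Veeam components will run under
sudo chown -R veeamrepo:veeamrepo /mnt/veeam-repo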

@HannesK sounds like a how-to forum post or a KB article is due!
Riley
Lurker
Posts: 1
Liked: never
Joined: Jan 29, 2021 7:59 pm
Full Name: Riley
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Riley »

So I found the command line to create the XFS file system from Veeam. What I have not found is the underlying systems that people are putting XFS on, i.e. processors, RAID, etc. I am exceptionally excited about immutability in v11 and I want to get a head start on building a system. Those of you with XFS already, what did you set up? Or is there a post somewhere I am not finding on the systems being used?
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev » 1 person likes this post

Any general-purpose server that meets our System Requirements for backup repositories will do, so everyone is using whatever they have :D
Here's the big thread spanning many years with people sharing all the different options (XFS does not bring any special requirements).
evilaedmin
Expert
Posts: 176
Liked: 30 times
Joined: Jul 26, 2018 8:04 pm
Full Name: Eugene V
Contact:

Re: V10 & XFS - all there is to know (?)

Post by evilaedmin »

XFS on top of ZFS volumes (zvol) or is that just asking for trouble?
Seve CH
Enthusiast
Posts: 67
Liked: 29 times
Joined: May 09, 2016 2:34 pm
Full Name: JM Severino
Location: Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Seve CH »

Riley wrote: Jan 29, 2021 8:09 pm Those of you with XFS already, what did you set up??
During testing, whatever we could find :-)

In production, we run on the iron directly. We wanted to be able to interchange the hardware of the Veeam Server and the Linux repo if we needed to, so the repo server is a bit overkill.
OS: Ubuntu 20.04 LTS Multipath+LVM2+XFS.
Server: HPE DL380 Gen10, 2xIntel Xeon Silver 4215R (2x8 Cores @3.2Ghz), 128GB RAM (1x DIMM per channel-> 2x6 DIMMS @2933Mhz)
Storage: Dell ME4012 with 2xSAS 12Gbit, 30x HD 16TB (480TB RAW) NL-SAS (7200RPM) in a single pool/disk group with dynamic RAID (ADAPT). This is the standard 10x Data + 2x Redundancy + 32TB Spare capacity (=2 disks).
nitramd
Veteran
Posts: 297
Liked: 85 times
Joined: Feb 16, 2017 8:05 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by nitramd »

evilaedmin wrote: Feb 06, 2021 10:07 pm XFS on top of ZFS volumes (zvol) or is that just asking for trouble?
Eugene,

Take a look at this thread - it should answer your question: servers-workstations-f49/v11-and-linux-t71075.html

Mr. Sightler is testing what you're asking about.
tsightler
VP, Product Management
Posts: 6009
Liked: 2842 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V10 & XFS - all there is to know (?)

Post by tsightler » 1 person likes this post

I actually did a full presentation on XFS on ZFS at last year's VeeamON. Admittedly this was prior to v11 and the hardened repository, but I still think there can be some additional value in combining both options, since ZFS can effectively protect the entire repository and revert it to a prior state easily. The biggest issue with XFS on a ZVOL that I've seen is the performance characteristics of ZVOLs themselves. If you can design a ZVOL solution that meets your performance requirements, it's an excellent solution.
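For anyone who wants to experiment, the layout being discussed looks roughly like this - pool name, volume size and block size are example values you would tune for your own workload:

# create a ZFS volume (zvol) and put reflink-enabled XFS on top of it
zfs create -V 10T -o volblocksize=64k tank/veeamrepo
mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/zvol/tank/veeamrepo
mkdir -p /mnt/veeam-repo
mount /dev/zvol/tank/veeamrepo /mnt/veeam-repo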
orb
Service Provider
Posts: 126
Liked: 27 times
Joined: Apr 01, 2016 5:36 pm
Full Name: Olivier
Contact:

Re: V10 & XFS - all there is to know (?)

Post by orb »

And still one of my favourites! I agree with the performance bit. ZFS or Ceph are great tools, but you need to know your subject all the way from the hardware and storage up to the application. Today they are not out-of-the-box Veeam-ready solutions, but I am confident we will soon see some experience shared on the matter.

Oli
pirx
Veeam Legend
Posts: 568
Liked: 72 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx »

I see that RHEL 8.2+ is supported for reflinks. I also see that it comes with a rusty 4.18 kernel, which means no iomap, which was introduced in 5.4 (https://blogs.oracle.com/linux/xfs-data ... ng-reflink). I can't tell whether iomap was backported to RHEL's 4.18 kernel though (does anybody know how to check this?). At least there is an elrepo 5.4 kernel: https://elrepo.org/linux/kernel/el8/x86_64/RPMS/

How important is kernel 5.4/iomap for reflink performance? As RHEL is the distribution we use for all Linux-based servers, Ubuntu or any other distro will not be possible.
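(For context, the only check I can think of is grepping the kernel package changelog, which is not conclusive since Red Hat backports patches selectively:)

# search the installed kernel package changelog for iomap-related backports
rpm -q --changelog kernel-core | grep -i iomap | head
# and confirm the running kernel plus the reflink state of the mounted filesystem
uname -r
xfs_info /backups | grep reflink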
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev » 1 person likes this post

Testing to this point has shown only minimal difference with kernel 5.4 for most real-world Veeam use cases. So unless you are planning to do something extreme, like keeping a large number of synthetic GFS points, there should not be much difference and earlier kernels will likely meet your performance requirements too.
pirx
Veeam Legend
Posts: 568
Liked: 72 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx »

What could be more extreme than using the SMB protocol for backing up PBs of data... ;)

I guess we are not average size: 1600 VMs, and our copy repository size would be ~400-500TB, maybe with S3 offloading, but that is mainly done on the primary backup target. 10 weeks retention with 14 dailies. The retention time might change and become longer in the future. What would you call a large number of GFS points?
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev »

Well, it would be unfair not to call your deployment large :D but the GFS schedule on the other hand is quite short. So from what we know now, I think you will be fine.
pirx
Veeam Legend
Posts: 568
Liked: 72 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx »

According to the Apollo/ReFS thread, 192GB RAM would be enough for a 500TB repository with ReFS. What about XFS? Is this also a reasonable number for XFS with reflinks?
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev »

Neither ReFS nor XFS have special RAM requirements associated with them. Just make sure you meet the system requirements for Veeam roles you're going to deploy on the server, and keep in mind they vary depending on the number of concurrent tasks you want to run.
pirx
Veeam Legend
Posts: 568
Liked: 72 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: V10 & XFS - all there is to know (?)

Post by pirx »

I remember the rule of thumb of 1 GB RAM per 1 TB used. We are talking about two systems, each with 2x26 cores, as copy targets; it's hard to compare this with our current setup of 2x10x100TB SMB shares where every extent has 30 concurrent tasks configured. The gateway servers are used as proxies too (7 servers, ~120 cores for everything).
Gostev
Chief Product Officer
Posts: 31457
Liked: 6648 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: V10 & XFS - all there is to know (?)

Post by Gostev »

pirx wrote: Mar 18, 2021 6:23 pmI remember the rule of thumb 1 GB RAM per 1 TB used.
Me too, but I think I still had hair when this rule existed :D it was a temp workaround for a Microsoft bug that was fixed a few years ago.
tsightler
VP, Product Management
Posts: 6009
Liked: 2842 times
Joined: Jun 05, 2009 12:57 pm
Full Name: Tom Sightler
Contact:

Re: V10 & XFS - all there is to know (?)

Post by tsightler » 3 people like this post

Also, a lot of that memory recommendation on Windows was an attempt to work around the crazy Windows buffer behavior, which was about 10x worse with ReFS due to various bugs. Now, with v11 bypassing the OS buffer for writes on Windows, memory usage should be more normal. Still, I would never recommend going below the 4GB per core recommended in best practice, and arguably 4 GB per planned task (generally core = task, but if you want to oversubscribe cores on your repo, having more memory is the best way to make sure you can do this). Admittedly, 4GB is probably overkill, but if your environment is large, the VeeamAgent process can grow quite a bit larger. There can be quite a difference in memory usage when you are backing up 50 VMs with 100GB disks vs 50 VMs with 4TB of disks.
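Applied to the 2x26-core copy targets mentioned above, that rule of thumb works out roughly like this (just an illustration, not a formal sizing): 52 cores x 4 GB per task = 208 GB per box, so a 256 GB configuration leaves some headroom for the OS and a bit of task oversubscription.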

I can pretty much say this: I've had far more customers regret not putting memory in the box than the other way around! If you want to have fewer problems, don't skimp on it. Admittedly, I don't think you need to go overboard either; I've had customers buy boxes with ~1TB of RAM, which seems like the other extreme! If I were buying a big box like that, I'd probably look at no less than 256GB if it's repo only, and 384GB if it's proxy+repo.

Note that the customers I work with are on the larger side, almost all have >10,000 systems being protected, some a LOT more, and many have dozens of PB of data under protection, but the lessons are mostly valid for customers of all sizes.
dcolpitts
Veeam ProPartner
Posts: 119
Liked: 24 times
Joined: Apr 01, 2011 10:36 am
Full Name: Dean Colpitts
Location: Atlantic coast of Canada
Contact:

Re: V10 & XFS - all there is to know (?)

Post by dcolpitts » 1 person likes this post

So to share my XFS setup: I just converted a very small customer from a DL380 Gen9 running Win2016 with BackupExec 20 (to RDX only) to ESXi on a DL380 Gen10 (1 x Xeon 5218, 192GB, 8 x 960GB SSD in RAID 6) with a 536FLR-T 10GbE adapter, RDX (via USB passthrough to the VBR VM), a Buffalo TeraStation (TS5410R with 4 x 8TB drives in RAID 5), and of course VBR now. The TS5410R has a single 10GbE port and 2 x 1GbE ports. In ESXi I created a 2nd vSwitch with just one port of the 536FLR-T in it and cross-connected the 10GbE port of the TS5410R to it. I created an Ubuntu VM (Server 20.04.2 LTS, 2 vCPUs, 8GB RAM, 40GB thin-provisioned drive) with two VMXNET3 NICs - one connected to each vSwitch (VLAN0 and the 2nd one connected to the TS5410R). I provisioned the entire TS5410R as an iSCSI LUN and presented it to the Ubuntu VM, where I formatted it with XFS. I then presented this as my Linux repository with immutability.
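Roughly, the steps on the Ubuntu VM amount to this (a sketch - the portal address, target IQN and device name are placeholders for whatever your NAS presents):

# discover and log in to the iSCSI target presented by the NAS
sudo iscsiadm -m discovery -t sendtargets -p 192.168.10.5
sudo iscsiadm -m node -T iqn.2004-08.jp.buffalo:ts5410r-target -p 192.168.10.5 --login
# format the new block device with reflink-enabled XFS and mount it
sudo mkfs.xfs -b size=4096 -m reflink=1,crc=1 /dev/sdb
sudo mkdir -p /mnt/veeam-repo
sudo mount /dev/sdb /mnt/veeam-repo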

The customer has around 940GB of data at present in 13 VMs (which is crazy since there was only a single Win2016 in the environment until I came along a few weeks ago). The initial full on the first day processed 940GB, read 904GB, and transferred 704GB in 59m49s at a rate of 280 MB/s. Daily incrementals are around 15GB in 28 minutes at 640 MB/s. Friday night synthetics are completing quicker than I can get up, go to the fridge and get a beer and get back to my desk to see it end (3/13/2021 9:28:25 PM :: Synthetic full backup created successfully [fast clone] - 00m53s).

I currently have 19 restore points so far (I only installed this a couple of weeks ago), and my repository is reporting 2.3TB of space used (as shown on the Backup Repositories screen), while my capacity is 16.3TB and my free space is 15.2TB (so only 1.1TB of capacity is actually used, which is also what shows when I do a df -h in the Ubuntu shell).

I also keep the two 1GbE switch ports that the TS5410R is plugged into disabled, so its management interface is not accessible on the LAN unless I specifically SSH into the switch and enable them. In testing (before delivering to the customer), I deleted the Ubuntu VM (as if I were an attacker), and I was able to reinstall and regain access to the iSCSI volume within 20 minutes, and from there I was able to access my backups.

Overall, I'm pretty happy with the performance so far, and it was a cost-effective solution for the customer on a limited budget (normally I'd just toss a StoreOnce 3520 at it, but $3000 vs $30000 makes a huge difference).

I am, however, currently struggling to figure out how to display the XFS dedup (fast clone space savings) statistics from the Ubuntu console - if anyone knows and would share, I'd appreciate it.

dcc
soncscy
Veteran
Posts: 643
Liked: 312 times
Joined: Aug 04, 2019 2:57 pm
Full Name: Harvey
Contact:

Re: V10 & XFS - all there is to know (?)

Post by soncscy » 1 person likes this post

>(normally I'd just toss a StoreOnce 3520 at it, but $3000 vs $30000 makes a huge difference).

Your post is great, but this is honestly the biggest shocker for me -- is this really what a StoreOnce goes for nowadays? How do they even justify it?

Just a small note, I love your setup in general. When you speak about checking the XFS stat for dedup, you mean just checking the savings?

The filefrag command should show that you're using the same extents (use the -v flag with a list of files)

https://unix.stackexchange.com/question ... les-on-xfs

Maybe someone has a built in reporting script, but this should be pretty useful to show the usage.

Otherwise, you can just compare the output of df on the volume with the repository UsedSpace property from PowerShell to get the same idea. IIRC, Veeam reports the logically written space as the Used Space property for block cloning volumes, while FreeSpace is a real read from the volume: https://www.veeam.com/kb2996 (see point 3)
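For example, something along these lines gives a rough picture of the savings (paths are placeholders):

# physical extent maps - synthetic fulls created with fast clone share extents with earlier files
filefrag -v /backups/job/*.vbk | less
# logical size of all backup files vs. what the filesystem actually allocated
du -sh --apparent-size /backups/job
df -h /backups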