mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am
Contact:

Just another V11 love letter...

Post by mkretzer » 17 people like this post

Now that our backup copy issue is mostly solved, it's time to give out some more praise for V11.

We back up 4150 VMs per day, most of them in only a few hours. Our ReFS repo server is a roughly 6-year-old Dell R920 system with only 60 cores.
With V10 this system was running at 100% CPU for hours each day. Now with V11, CPU usage over the same period is down to 25-30%. And this is after the backup copy hotfix, meaning it's measured while the copy job is copying the backups in near real time over a 10 Gbit link and a tape backup is streaming to 2-4 LTO-8 tapes in the background (the tape server runs on the repo)!

And don't get me started on RAM usage: the server has 2.2 TB of RAM, and at times 1 TB was really in use. Now the usage does not go much higher than 50 GB!

I had already asked for budget to replace our old R920 with a 128-core AMD system with 2 TB of RAM, and now I can purchase a model with far fewer cores and less RAM, which means I can use components that are readily available and don't have to wait 80 days to get that processor!

Please give my sincere thanks to the Devs!
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Just another V11 love letter...

Post by Mildur » 1 person likes this post

"Dell R920 system with only 60 cores" for 4150 VMs?
Really impressive 👍
Is this only a backup copy target?
Product Management Analyst @ Veeam Software
mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am
Contact:

Re: Just another V11 love letter...

Post by mkretzer » 4 people like this post

No, this is our primary repo running Windows SAC, and it's also the tape server for those 4 LTO-8 drives. There's ~1.1 PB of storage behind it.
3000 of these VMs are backed up in about 3 1/2 hours now.
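
For a rough sense of what that implies, here's a back-of-the-envelope sketch in Python using only the two figures above (average VM size isn't stated, so this says nothing about GB/s):

    # Rough per-VM throughput implied by the figures quoted above (a sketch, not measured data)
    vms = 3000
    window_hours = 3.5

    vms_per_hour = vms / window_hours      # ~857 VMs per hour
    vms_per_minute = vms_per_hour / 60     # ~14 VMs per minute
    print(f"{vms_per_hour:.0f} VMs/hour, {vms_per_minute:.1f} VMs/minute")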
Mildur
Product Manager
Posts: 8735
Liked: 2294 times
Joined: May 13, 2017 4:51 pm
Full Name: Fabian K.
Location: Switzerland
Contact:

Re: Just another V11 love letter...

Post by Mildur » 1 person likes this post

Cool, this is really impressive. A single repo for 3000 VMs in 3 1/2 hours.
Product Management Analyst @ Veeam Software
Gostev
Chief Product Officer
Posts: 31561
Liked: 6725 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Just another V11 love letter...

Post by Gostev »

Thanks, Markus, for your kind words!
Indeed, the devs did a lot of work on the engine in V11.

By the way, the above is a nice piece of information for ReFS sceptics too ;)
mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am
Contact:

Re: Just another V11 love letter...

Post by mkretzer »

@gostev
Yeah, about that... ReFS is not much faster in our system with V11 (which is not an issue, as it worked well for us with SAC), but the much lower memory footprint is interesting.
I found one strange thing: deletion of old restore points seems to take longer now. Did you throttle that? (I know that a long time ago this was kind of a feature request from me, as deletion of restore points caused issues even with the latest LTSC.)
Gostev
Chief Product Officer
Posts: 31561
Liked: 6725 times
Joined: Jan 01, 2006 1:01 am
Location: Baar, Switzerland
Contact:

Re: Just another V11 love letter...

Post by Gostev »

No, we did not implement anything remotely similar in V11. At least not on purpose :D There's always a chance it could be caused by some other change.
mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am
Contact:

Re: Just another V11 love letter...

Post by mkretzer »

OK, because I found this for both ReFS and XFS. And I remember deleting hundreds of TBs of XFS backups in V10 quite fast when we tested XFS stability.
I'll talk to support!
pirx
Veteran
Posts: 573
Liked: 75 times
Joined: Dec 20, 2015 6:24 pm
Contact:

Re: Just another V11 love letter...

Post by pirx »

mkretzer wrote: Jun 06, 2021 7:40 pm We back up 4150 VMs per day, most of them in only a few hours. Our ReFS repo server is a roughly 6-year-old Dell R920 system with only 60 cores.
What concurrent task slots do you have configured on this 60-core server? Is this server both repo and proxy, or just repo? Even if it's just a repo server, that sounds like an extremely high overcommitment rate.

Still on v10 here: I have two new high-density repo servers, each with 52 cores and 2 SOBR extents (1x backup, 1x copy); task slots are set to 55 + 40, with no proxy tasks yet, only repo. I only see a peak of 20% CPU load, so I'm wondering whether I can go much higher than a 1:1 or 1:2 slot-to-core ratio with modern CPUs. This affects the number of servers we have to order, as we don't need that much disk space.
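
For reference, this is the slot-to-core ratio that configuration works out to (a minimal back-of-the-envelope sketch in Python, using only the numbers above; per-task RAM and disk throughput obviously matter too and aren't modeled here):

    # Task slots per physical core for the repo server described above (a sketch, not a sizing rule)
    cores = 52
    task_slots = 55 + 40          # two SOBR extents: 55 (backup) + 40 (copy)

    ratio = task_slots / cores
    print(f"{ratio:.1f} task slots per core")   # ~1.8, i.e. a bit under 2:1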
mkretzer
Veeam Legend
Posts: 1145
Liked: 388 times
Joined: Dec 17, 2015 7:17 am
Contact:

Re: Just another V11 love letter...

Post by mkretzer » 1 person likes this post

We have 10 SOBR extents, each with a task limit of 99 (we had it set to unlimited for a long time; this is more of a safety measure). It's only a repo; we have 6 Linux hot-add proxies with 6 task slots each, so those are our limiting factor. Still, synthetics really use all these task slots.
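
To illustrate why the proxies rather than the repository end up as the bottleneck with these numbers, here's a rough Python sketch using only the figures in this post (as noted above, synthetic operations run on the repository itself and so aren't bound by proxy slots):

    # Effective backup concurrency is capped by the smaller of repo and proxy task slots (sketch)
    repo_slots = 10 * 99     # 10 SOBR extents, task limit 99 each
    proxy_slots = 6 * 6      # 6 Linux hot-add proxies, 6 task slots each

    effective_backup_tasks = min(repo_slots, proxy_slots)
    print(effective_backup_tasks)    # 36 -> the proxy slots are the limiting factor
    # Synthetic operations run on the repository itself, so they can still fill the repo slots.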