- Service Provider
- Posts: 11
- Liked: 1 time
- Joined: Feb 26, 2019 6:57 pm
- Full Name: Thomas L Moore
- Location: Teton Valley, Idaho
DD9800 Optimize for Veeam 11a & SOBR Offload
Hello Veeam Community,
DISCLAIMER: This architecture was "hurt when I got here". I'm aware of Veeam architecture best practice and no, I wouldn't have done it this way. It is what it is and I need to make it better.
I'm hoping to glean real-world experience on the effect of limiting the maximum concurrent tasks to 250 (Dell's recommendation) on a DD9800 with 18 disk packs and DD Boost. Have you found it helpful or hurtful to increase the Veeam load-control max concurrent tasks beyond 250? Per the DD System Manager, I'm averaging ~300 DD Boost active connections even with the Veeam limit set to 250. Is anyone running it unlimited?
The use case is the DD9800 as the main on-site repository with SOBR offload to Azure. It is the Veeam main repo and it receives simultaneous backup traffic from Veeam, Oracle RMAN via DDBoost, and SQL via DDBoost. Offloads are not completing in a 24 hr. period and are being stopped by the next job run and by job runs with synthetic fulls. We are currently using a 32-CPU VM as the dedicated Azure gateway, and I am averaging ~800 MiB/s read from DDBoost.
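For reference, here's the rough math on what ~800 MiB/s buys over a day (just arithmetic on the numbers above):

```python
# Rough math: aggregate data readable from the DD in one 24 hr offload window.
aggregate_read_mib_s = 800                  # observed DD Boost read rate
seconds_per_day = 24 * 60 * 60

daily_tib = aggregate_read_mib_s * seconds_per_day / 1024 ** 2
print(f"~{daily_tib:.1f} TiB readable per 24 hr")   # ~65.9 TiB
# So the aggregate rate is not the obvious limit; something else is.
```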
Can I push this DD9800 any harder? The dedicated gateway was a Veeam support recommendation prior to my arrival. Should I perhaps go back to distributed gateways to Azure? My Azure private endpoint upload is 10G and is not saturated.
Thoughts?
- Product Manager
- Posts: 14914
- Liked: 3109 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: DD9800 Optimize for Veeam 11a & SOBR Offload
Hello,
Depending on the number of proxy tasks you have configured, unlimited might even lower performance. But I don't have such a box, so let's see what other customers say.
Thomas L Moore wrote: We are currently using a 32 CPU as the dedicated Azure gateway, and I am averaging 800ish MiB/s Read from DDBoost.
800 MByte/s read actually sounds good to me for a dedupe appliance. With 250 repository tasks, though, you are significantly above the recommendation (3x CPU cores = 96 tasks). What is your CPU load on the gateway server?
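To make that math explicit (32 cores from your post, the 3x multiplier from the recommendation):

```python
# Task-slot guideline: roughly 3 concurrent tasks per gateway CPU core.
gateway_cores = 32          # vCPUs on the dedicated gateway VM (from the post)
tasks_per_core = 3          # multiplier from the recommendation

recommended_tasks = gateway_cores * tasks_per_core
print(recommended_tasks)    # 96, vs. the 250 currently configured (~2.6x over)
```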
Thomas L Moore wrote: Can I push this DD9800 any harder?
Maybe... I remember best practices (but cannot find them anymore, need to check) that suggest multiple DDBoost datastores with multiple gateway servers.
Best regards,
Hannes
- VP, Product Management
- Posts: 7121
- Liked: 1525 times
- Joined: May 04, 2011 8:36 am
- Full Name: Andreas Neufert
- Location: Germany
Re: DD9800 Optimize for Veeam 11a & SOBR Offload
For input processing to the Data Domain:
Do not set unlimited tasks, as it can lead to background processing issues: it can cause bottlenecks in the infrastructure and thus longer processing times than needed.
As the repository gateway server performs decompression, you need 1 core per repository task slot. So it depends on what your gateway hardware looks like.
For offloading data to Azure, it is best practice not to go far beyond 3000 parallel S3 operations. Each task slot performs 64 parallel S3 operations, so I recommend setting the maximum task slots on the object storage repository to no more than 50. Also check the logs for "Busy" warnings from the object storage, which would slow down processing completely. The initial offload will continue whenever a task slot is available for processing; it will not restart from the beginning if the task slots go to higher-priority jobs, and it will resume offloading when task slots become available again.
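To show where the 50 comes from (both inputs are the guidelines above):

```python
# Deriving the object storage task-slot cap from the S3 parallelism guideline.
max_parallel_s3_ops = 3000  # recommended ceiling for parallel S3 operations
ops_per_task_slot = 64      # parallel S3 operations issued per task slot

max_task_slots = max_parallel_s3_ops // ops_per_task_slot
print(max_task_slots)       # 46, hence the "not more than ~50" guidance
```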
- Service Provider
- Posts: 11
- Liked: 1 time
- Joined: Feb 26, 2019 6:57 pm
- Full Name: Thomas L Moore
- Location: Teton Valley, Idaho
Re: DD9800 Optimize for Veeam 11a & SOBR Offload
I inherited roughly 173 8x16 VM proxies with the DD as the local repo; they are all "automatic selection" gateways to the DD. As a side note, I did a 30-day analysis of the number of times each proxy was utilized during that period, and yes, we are probably overprovisioned in that regard. Task limits on the DD are now set back to 250 from Veeam. The problem is I'm discovering that RMAN DDBoost and SQL DDBoost are also hitting the DD. I suspect this is filling up the task slots on the DD and keeping Veeam out? Going to speak to the Dell SE about this.
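The connection math that makes me suspect this (numbers from my first post; attributing the whole difference to RMAN/SQL is my assumption):

```python
# Rough DD Boost connection accounting from the observed numbers.
veeam_task_limit = 250      # Veeam load-control cap on the repository
observed_connections = 300  # avg active DD Boost connections (DD System Manager)

non_veeam = observed_connections - veeam_task_limit
print(non_veeam)            # ~50 connections Veeam cannot account for,
                            # presumably the RMAN and SQL DDBoost streams
```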
The Azure blob is fronted by a 32 x 64 VM at ~13% CPU utilization as the dedicated gateway to a private endpoint at 10G. I see ~800 MiB/s consistent read throughput from the DD. The previous dedicated gateway was a 16 x 256 that ran at ~80% CPU utilization and gave me ~500 MiB/s. Makes me think the bottleneck is elsewhere? Perhaps the SQL Server native backups are consuming all the resources on the DD?
On a final note, it seems each individual read stream from the DD averages 10 MB/s. Some are 5, some are 15-18, but I rarely see individual read streams exceed that level. So while I can send a bunch of small files, I can only send at most a ~400 GB file up to Azure in a 24 hr. period. If that's a .vbk from a synthetic full, that equates to a VM consumed size of roughly 1.2 TB? The business is not going to accept me telling them they have to limit their VMs to that size.
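The per-stream math behind that claim (the stream rates are what I observe; which rate a given .vbk stream sustains is the unknown):

```python
# Max file size a single read stream can push to Azure in a 24 hr window.
seconds_per_day = 24 * 60 * 60

for stream_mb_s in (5, 10, 18):     # observed low / average / high stream rates
    gb_per_day = stream_mb_s * seconds_per_day / 1000
    print(f"{stream_mb_s} MB/s -> ~{gb_per_day:.0f} GB per 24 hr")
# 5 MB/s -> ~432 GB, which is where the ~400 GB ceiling comes from.
```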
- Product Manager
- Posts: 14914
- Liked: 3109 times
- Joined: Sep 01, 2014 11:46 am
- Full Name: Hannes Kasparick
- Location: Austria
Re: DD9800 Optimize for Veeam 11a & SOBR Offload
Hello,
Thomas L Moore wrote: read stream from the DD averages 10 MB/s. Some are 5, some are 15-18
Yep, that sounds more like the speed I expected from a dedupe appliance, from what I have heard in the past. I have also heard faster values, but I was actually surprised by 800 MByte/s. I mean, that read rate then gets compressed again (the data on the DD is uncompressed from Veeam's perspective) to probably 50%. So on your 10 Gbit/s link you should probably see around 3-4 Gbit/s used by Veeam. I remember performance limitations on Azure from some years ago, but I hope those have been solved over time.
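In numbers (the 50% ratio is my assumption about Veeam's re-compression):

```python
# Expected wire bandwidth if the 800 MiB/s read compresses to ~50% for upload.
read_mib_s = 800
compression_ratio = 0.5     # assumed Veeam re-compression before upload

wire_gbit_s = read_mib_s * compression_ratio * 1024 ** 2 * 8 / 1e9
print(f"~{wire_gbit_s:.1f} Gbit/s")   # ~3.4 Gbit/s of the 10 Gbit/s link
```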
Thomas L Moore wrote: If that's a .vbk from a syn full, then that equates to roughly a VM consumed size of ~ 1.2 TB?
Upload to the capacity tier is incremental forever. Please see the sticky FAQ.
Thomas L Moore wrote: syn full
For sure, task limitation is important with that, because the synthetic fulls will be created by the mount server (the table, first row).
After you talked to the Dell SE, I would probably still try out creating multiple repositories on the box and see how it goes.
Best regards,
Hannes